Author(s): Jake Samuel
Mentor(s): Ali Beheshti, Mechanical Engineering
Abstract
Machine learning has been explored as a method of identifying material properties from indentation data, a process called inverse analysis. A paper by Lu Lu et al. examines machine learning techniques that could aid this process by adding a residual connection to a multi-fidelity neural network (MFRN) [4]. This work examines how that technique improves inverse analysis for small samples of high-fidelity data. The MFRN was compared to Gaussian process regression, a well-established multi-fidelity model. It was found that adding a residual connection significantly lowered the error of inverse analysis on smaller data samples.
Audio Transcript
Hello, my name is Jake Samuel, and I'm here to talk about using machine-learning-aided nanoindentation to discover material properties. First, let me give some background. Material property testing can be a long and expensive process. It can be hard to test materials that are small, and you may not want to use a destructive test on a thin-film material. Another way to test material properties is nanoindentation. Nanoindentation is the process of making a small indent in a material while measuring the loading and unloading, that is, force over depth. If we know the material's microstructure and how it relates to the material's properties, we can find those properties. However, doing that rigorously can be quite hard. Instead, machine learning techniques have been applied to this kind of problem with good success, since machine learning models are able to find patterns that might not be obvious to humans. The machine learning model we're going to examine in this work is a neural network, which, as you can see here, is composed of hidden layers that each apply some kind of activation function. Each layer does a little math, and some number of inputs lead to some number of outputs, so the network essentially functions as a black box that you can train over very large data sets. The problem is that if you were to train it on indentation data, each indentation test is a test you have to run, and it can take a long time to perform. And if you are in the business of data collection, you know that data can become corrupted or go bad very easily, and indentation is no different: over a long period of indentation testing, you are likely to end up with bad data if a test gets messed up.
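To make the "black box" picture concrete, here is a minimal sketch of the kind of feed-forward neural network described above. This is a generic illustration, not the model from the paper; the layer sizes, random weights, and tanh activation are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    """Forward pass of a simple fully connected network: each hidden
    layer applies a linear map followed by a tanh activation ("each of
    them do a little math"); the final layer is linear."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)          # hidden layer with activation
    return h @ weights[-1] + biases[-1]  # linear output layer

# Example architecture: 2 inputs -> 8 hidden units -> 1 output
sizes = [2, 8, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = np.array([[0.5, -1.0]])   # one sample with 2 input features
y = mlp_forward(x, weights, biases)
print(y.shape)  # (1, 1): one prediction per sample
```

In practice the weights would be learned by gradient descent on a training set rather than drawn at random; this sketch only shows the input-to-output structure.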
There is a paper by Lu Lu et al. that examines how certain machine learning techniques can be improved. One example is multi-fidelity learning: if you do not have many data points, you can supplement them with a larger number of lower-quality data points. For indentation, if you don't have many experimental data points, you can supplement them with simulations. One of the innovations described in the paper is adding a residual connection to improve the neural network. You will see some math here, so I'll try to explain it quickly. A multi-fidelity network uses a linear function and a nonlinear function in combination to learn both the linear and nonlinear relationships between the high- and low-fidelity data. What Lu Lu describes here is a residual connection, α_l·y_L, which in theory should make it easier to learn from data that is already correlated. We're going to compare this to Gaussian process regression, a popular machine learning algorithm. It has been in use since 2001, and multi-fidelity Gaussian process regression models have existed for a long while, so we can call it the standard in multi-fidelity machine learning. In my work, I have tried to replicate the machine learning model proposed by Lu Lu and compared it with traditional Gaussian process regression. Here we see results on simple simulation data: the low fidelity is from a 2D finite element model (FEM 2D) and the high fidelity is from a 3D one (FEM 3D). In orange, our multi-fidelity model with the residual connection outperforms our multi-fidelity Gaussian process model quite significantly. For any of you familiar with machine learning, you might think the learning curve looks a bit flat; usually you'll see a downward slope like the one here. The reason is that, since this is strictly simulation data, it is really easy for a machine learning model to learn.
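As a rough sketch of the multi-fidelity composition described above, as I understand it: the high-fidelity prediction combines a linear correlation term, a nonlinear correlation term, and the residual connection α·y_L. Everything below is illustrative; the toy stand-in functions replace the trained sub-networks from the paper.

```python
import numpy as np

def high_fidelity_prediction(x, y_low, f_linear, f_nonlinear, alpha):
    """Illustrative multi-fidelity composition:
        y_H(x) ~ F_l(x, y_L) + F_nl(x, y_L) + alpha * y_L,
    where F_l captures the linear correlation between fidelities,
    F_nl the nonlinear correlation, and alpha * y_L is the residual
    (skip) connection described in the talk."""
    z = np.concatenate([x, y_low])  # both sub-models see x and y_L
    return f_linear(z) + f_nonlinear(z) + alpha * y_low

# Toy stand-ins for the trained sub-networks (assumptions, not the paper's models)
f_lin = lambda z: 0.1 * z.sum(keepdims=True)
f_nl = lambda z: 0.05 * np.tanh(z).sum(keepdims=True)

x = np.array([0.3])              # an input point
y_low = np.array([1.2])          # the low-fidelity model's prediction there
y_high = high_fidelity_prediction(x, y_low, f_lin, f_nl, alpha=1.0)
print(y_high)
```

The intuition behind the residual term: when the low- and high-fidelity data are already strongly correlated, the network only has to learn a small correction on top of y_L rather than the full mapping from scratch.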
Think of it this way: intuitively, it would be easy for a computer to learn from data that a computer is spitting out. When measured with mean absolute percentage error (MAPE), the residual model does significantly better, at almost 10% MAPE, while the multi-fidelity Gaussian process (MFGP) gets around 90% MAPE. Replicating these results on actual experimental data gives us some interesting figures. Here, again in orange, we have the multi-fidelity model with the residual connection, and as you can see it does a lot better at lower data set sizes. Up until a high-fidelity data set size of 6, which I believe corresponds to 60 high-fidelity data points, our residual connection does a lot better, but then it evens out to be about the same. The traditional Gaussian process regression does do a little better, at 0.5% MAPE, while the residual connection network plateaus at 1% MAPE. From this we can conclude that using a residual connection, as proposed by Lu Lu, seems to hold promise for learning from small data sets and low-fidelity data sets. This is important because it could greatly reduce costs, and if you are running into problems with data collection, it is a useful thing to know. Future work will explore how physical data from indentations could relate to material properties such as creep. Here we have the mark of an indentation, from which we can measure the indentation depth and the indentation width. This is relevant because such data gives you a lot less information than you would normally get from nanoindentation tests. I would like to give a big thank you to Dr. Ali Beheshti and Shaheen Mahmood for supporting me in the lab, to Dr. Karen Lee and the OSCAR program for giving me the opportunity to do my research, and thank you for watching.
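For reference, the MAPE metric quoted throughout the talk (mean absolute percentage error) can be computed like this; the arrays are made-up example values.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent.
    Assumes no true value is exactly zero."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Made-up example: predictions off by 10% and 20% average to 15% MAPE
print(mape([100.0, 200.0], [110.0, 160.0]))  # → 15.0
```

Because each error is scaled by the true value, MAPE is comparable across material properties with very different magnitudes, which is likely why it is used to compare the models here.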