lululxvi / deep-learning-for-indentation
Extraction of mechanical properties of materials through deep learning from instrumented indentation
License: Apache License 2.0
Hello! Could you possibly add to the README instructions on how to replicate the results in your PNAS paper? Also, fit_n.py loads several .dat files that I don't see in src/. Are they available somewhere else?
Hi, the MFNN is built from low-fidelity data and high-fidelity data.
I'm writing to ask about the size of the high-fidelity test dataset used to assess predictive accuracy.
If there are only a few high-fidelity experimental data points, how can the accuracy of the trained MFNN be validated?
Best!
Hello! When I run nn.py, an error occurs: OSError: [Errno 22] Invalid argument: "E*.dat". How can I get the E*.dat files?
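For what it's worth, `[Errno 22] Invalid argument` on Windows usually means the literal string `"E*.dat"` was passed to `open()` instead of being expanded as a glob pattern. A minimal sketch of the expansion step (the file names here are placeholders, not the repository's actual data):

```python
import glob
import os
import tempfile

# "E*.dat" is a glob pattern, not a literal filename; passing it
# straight to open() raises OSError: [Errno 22] on Windows.
# Expanding the pattern first avoids that.
workdir = tempfile.mkdtemp()
for name in ("E1.dat", "E2.dat"):  # placeholder files for illustration
    with open(os.path.join(workdir, name), "w") as f:
        f.write("0.0 1.0\n")

paths = sorted(glob.glob(os.path.join(workdir, "E*.dat")))
print(len(paths))  # 2
```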
Prof. Lu, hello!
I am trying to replicate the MFNN in PyTorch, but I am not sure how to train it.
In the MFNN without residual in ref. 34, the training procedure first trains the low-fidelity model separately (updating only the low-fidelity parameters), then trains the high-fidelity model (updating both sets of parameters): for example, 20,000 epochs for the low-fidelity model, then 20,000 epochs for the high-fidelity model.
In this work, it seems both are trained simultaneously within each epoch (a low-fidelity batch updates only the low-fidelity model; a high-fidelity batch updates both models' parameters).
For the E* prediction, the low-fidelity model alone ([3, 128, 128, 1]) gives good results with MAPE < 5%, but when I train the high-fidelity model, the results are poor by comparison.
For the sigma_y prediction, there is no convergence trend.
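The simultaneous per-batch routing described above can be sketched as follows; this is only an illustrative mock (the counters stand in for gradient steps on each parameter group and are not from the repository):

```python
# Sketch of the per-batch update routing described above (names are
# hypothetical, not from the repository). Each "update" just records
# which parameter group would receive a gradient step.
updates = {"low": 0, "high": 0}

def train_step(batch_fidelity):
    if batch_fidelity == "low":
        # low-fidelity batch: only the low-fidelity subnetwork is updated
        updates["low"] += 1
    else:
        # high-fidelity batch: both subnetworks are updated jointly
        updates["low"] += 1
        updates["high"] += 1

# one "epoch" interleaving batches from both datasets
for fidelity in ["low", "low", "high", "low", "high"]:
    train_step(fidelity)

print(updates)  # {'low': 5, 'high': 2}
```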
Another question concerns the MFNN with residual:
can it be replaced by a linear layer plus tanh acting on the input [linear part, nonlinear part]?
(I cannot understand how the two alpha parameters are trained.)
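One plausible reading of the residual combination, sketched as a toy NumPy fit: the output is y = alpha1 * y_lin + alpha2 * y_nl, with both alphas as ordinary trainable scalars updated by gradient descent alongside the network weights. This is an assumed form for illustration, not the authors' exact code:

```python
import numpy as np

# Toy sketch (assumed form): alpha1 and alpha2 are plain trainable
# scalars; here the two subnetwork outputs are frozen stand-ins so
# only the alphas are learned, via gradient descent on the MSE.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y_lin = x                # stand-in for the linear subnetwork output
y_nl = np.tanh(3 * x)    # stand-in for the nonlinear subnetwork output
target = 0.3 * y_lin + 0.7 * y_nl

alpha1, alpha2, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    err = alpha1 * y_lin + alpha2 * y_nl - target
    # gradients of the mean squared error w.r.t. each alpha
    alpha1 -= lr * 2 * np.mean(err * y_lin)
    alpha2 -= lr * 2 * np.mean(err * y_nl)

print(round(alpha1, 2), round(alpha2, 2))  # 0.3 0.7
```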
Thanks for sharing the code with the community.
It seems there have been major updates to the DeepXDE package since this code was published, so could you please update this code for the indentation problem accordingly?
If I install the newest DeepXDE, I get the error "module 'deepxde' has no attribute 'apply'", but if I go back to v0.8.0, the problem goes away.
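A version-tolerant lookup may tide this over: the error message suggests the helper is no longer a top-level attribute in newer DeepXDE (elsewhere in this thread it is called as `dde.utils.apply`). A self-contained sketch using mock modules, since the exact DeepXDE layout depends on the installed version:

```python
from types import SimpleNamespace

# Sketch of a version-tolerant lookup (mock modules for illustration):
# try the old top-level dde.apply first, then fall back to
# dde.utils.apply as used in newer versions.
def resolve_apply(dde):
    return getattr(dde, "apply", None) or dde.utils.apply

old_style = SimpleNamespace(apply=lambda f, args: f(*args),
                            utils=SimpleNamespace())
new_style = SimpleNamespace(utils=SimpleNamespace(apply=lambda f, args: f(*args)))

for dde in (old_style, new_style):
    apply_fn = resolve_apply(dde)
    print(apply_fn(lambda x: x + 1, (41,)))  # 42 both times
```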
Many thanks.
Thanks for sharing this novel algorithm for identifying material properties.
I still have a few questions about transfer learning:
For the function “validation_exp_cross_transfer”, it seems that only the 2D and 3D FEM datasets are involved in the pre-training. Can the experimental dataset also be involved in pre-training and saved as a model file?
If I want to train a 2D FEM + 3D FEM + EXP network as in Figure 4 and apply it to identify the properties of other unknown materials, how can this be achieved?
What is the difference between the function “validation_exp_cross2” and the function “validation_exp_cross3”? It looks like the inputs and outputs of these two functions are the same.
Thanks.
Hello Lu,
First of all, many thanks for your efforts and for making deepxde available; it is really useful.
I am trying to reimplement the multi-fidelity network alone in PyTorch. As I am not very familiar with TensorFlow, I am struggling a bit to understand it properly, and I would be happy if you could give me some advice.
As I understand it, at least with the TensorFlow backend, the number of epochs corresponds to single passes over all samples in a dataset. In the case of multi-fidelity networks, we have the low-fidelity and multi-fidelity datasets (forget about the experimental dataset for now) which, if I understood correctly, are used simultaneously to train the low- and high-fidelity 'portions' of the multi-fidelity model. My question is: what do you do when one dataset has fewer points than the other?
Take this as an example: the high-fidelity network needs as input the 'low-fidelity prediction' of a low-fidelity sample concatenated with a high-fidelity sample. What if one of the datasets, let's say the low-fidelity one, has already used all its samples? Do you start sampling from the dataset again until the total number of iterations (epochs over the dataset) is done? Do you sample with replacement, or do you use all samples and then start sampling from the dataset again?
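One common convention, sketched below, is to cycle the smaller dataset so that every training step sees a pair of samples; this is an assumption about typical practice, not necessarily what this repository does:

```python
import itertools

# Sketch: when the low- and high-fidelity datasets differ in size,
# cycle the smaller one so each step gets one sample from each.
# (Sample names are placeholders.)
low = ["L0", "L1", "L2", "L3", "L4"]   # 5 low-fidelity samples
high = ["H0", "H1"]                    # 2 high-fidelity samples

pairs = list(zip(low, itertools.cycle(high)))
print(pairs)
# [('L0', 'H0'), ('L1', 'H1'), ('L2', 'H0'), ('L3', 'H1'), ('L4', 'H0')]
```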
Thanks in advance.
Dear Lulu,
Thank you for your paper "Extraction of mechanical properties... " and your open-source code. They provide a novel approach for investigating mechanical problems.
I have run your code and am confused about how the transfer learning works. Specifically, I cannot find where the first training influences the second training of the multi-fidelity NN in the code of nn.py.
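The usual transfer-learning mechanism is that the second run starts from the weights saved by the first run rather than from a fresh random initialization; a minimal sketch of that hand-off (an assumed illustration with made-up weights, not the exact code in nn.py):

```python
import copy

def pretrain():
    # stand-in for the first training stage; returns learned weights
    return {"w1": [5, -2], "w2": [13]}

def finetune(init_weights):
    # the second stage copies the pretrained weights as its starting
    # point; this is how the first training influences the second
    weights = copy.deepcopy(init_weights)
    weights["w2"] = [w + 1 for w in weights["w2"]]  # a mock "training" step
    return weights

pretrained = pretrain()
final = finetune(pretrained)
print(final["w1"], final["w2"])  # [5, -2] [14]
```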
Thanks for your time and I am looking forward to your reply.
Hello Prof. Lu!
I am trying to replicate the results in Fig. S1 A, for which I am running validation_model("sigma_y", 10) and varying the train size from 10 to 20, 30, ..., 10,000 manually each time I run it.
My question is whether the MAPE on the y-axis in Fig. S1 A is the mean value of the MAPE output by validation_model. Running the model for train_size=10, I get the mean value 1274.4625, which is not of the same order of magnitude as the MAPE in Fig. S1 A. I have run it for train_size up to 100 and found no value close to what is in Fig. S1 A. I would appreciate your response on this, please.
Result:
sigma_y 10
1274.4625 689.9193
I have the same issue with Fig S1 B.
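For reference, the standard MAPE definition, sketched with hypothetical numbers; whether Fig. S1 A applies exactly this averaging to the validation_model outputs is the open question above:

```python
import numpy as np

# Standard mean absolute percentage error, in percent.
# (Inputs here are made-up numbers for illustration.)
def mape(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

print(mape([100.0, 200.0], [110.0, 180.0]))  # 10.0
```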
Thanks for sharing this novel approach to studying material properties. I'm a beginner with neural networks and have some basic questions about how to read the .dat files output by the nn.py code.
Hello Professor, thanks for your kindness and your work.
I'm a student following your code, and I faced some issues while running the mfnn code.
I would like to ask you some questions.
When I run validation_exp(), I obtain two MAPE and two STD values in vector form:
"sigma_y
[52.81542692 20.47970335] [11.47603269 2.44488563]"
I expect 52.815 to be the MAPE of the low-fidelity NN output, and 20.479 to be the MAPE of the full NN output.
Could you tell me whether my expectation is right? If it isn't, could you tell me what these values mean?
I have run 2 cases of validation_exp(): (1) Al6061, (2) B3067, and I obtained MAPE values for sigma_y significantly different from those in your paper.
For case (1) Al6061, the paper (Fig. 4 A) suggests about 15% MAPE with NN (2D + 3D). But when I try to get the MAPE for NN (2D + 3D), I get a significantly larger value. Below is the result of my run.
"sigma_y
[3683.31674364 102.5687206 ] [3102.81756024 72.03417236]"
For case (2) B3067, the paper (Fig. 6 B) suggests 105.5% MAPE with NN (raw). But when I try to get the MAPE for NN (raw), I again get a different value, as in case (1). Below is the result of my run.
"sigma_y
[52.81542692 20.47970335] [11.47603269 2.44488563]"
I'm confused by the two different behaviors of the same code: in case (1), Al6061, it gives me a larger MAPE, but in case (2), B3067, it gives me a smaller one. I want to know why I'm getting wrong MAPE values from validation_exp(). Were there any training-dataset adjustments when you validated those values for the paper?
This is different from that of validation_exp():
res = dde.utils.apply(mfnn, (data,))
ape.append(res[:2])
The first code gives only one MAPE value, but the second code gives two MAPE values, and it seems the first of the two values from the second code is equal to the value given by the first code.
In Fig. 3 D of your paper, with a high-fidelity training dataset size of 12, the suggested MAPE is about 22%, and when I run the two different code paths, I obtain
22.575627271690337 11.91528043968248 and
[22.18273957 9.71804237] [12.79467543 12.11059052].
If my expectation from Q1 is right, the MAPE in this case should be 9.718, but the paper suggests 22 for this result. If my expectation is right, can you tell me why you used 22% MAPE as the result in Fig. 3 D?
These are the questions and issues I met while running your code. Thanks again for your hard work and for giving access to your code.
Sincerely, Koo.
Hello,
I was trying to execute your code, and I'm facing an issue with a certain function call. deepxde has been imported as dde, and the function in question is dde.apply; the error I get is that deepxde has no attribute 'apply'.
Please help me through.
Hello Professor,
I am trying to present the results of Figure S1 A from your paper, specifically focusing on the values of E*.
I am using DeepXDE version 1.1.2 with TensorFlow 2.7.0 as the backend. The data is generated by executing the gen-forward code in model.py.
Is the MAPE value on the Y-axis of Figure S1 directly calculated in the code, or should it be separately computed from the result files?
The average MAPE values are excessively high. Can you suggest any potential issues causing this?
Thank you.
Hello!
First of all, thank you for publishing the code and the data for this paper.
There is an issue using "nn.py". If I run the code without choosing anything in "main()", it works. But if I uncomment anything in "main()", my PC gets caught in an infinite loop with always the same output in the Spyder console, and unfortunately I don't know why. Maybe the packages have been updated since the code was published. Do you know the package versions you used? I would be very grateful if you could have a look at the attached screenshots and possibly offer a solution.
In one of the screenshots I marked a line in the error message; maybe that's part of the problem.