
deep-learning-for-indentation's People

Contributors

lululxvi


deep-learning-for-indentation's Issues

How to replicate paper's results

Hello! Could you possibly add to the README instructions on how to replicate the results in your PNAS paper? Also, fit_n.py loads several .dat files that I don't see in src/. Are they available somewhere else?

MFNN test dataset

Hi, the MFNN is built from low-fidelity and high-fidelity data.

I'm writing to ask: what is the size of the high-fidelity test dataset used to assess predictive accuracy?
If the experimental high-fidelity data are too few, how can the accuracy of the trained MFNN be validated?

Best!

OSError with E*.dat

Hello! When I run nn.py, an error occurs: OSError: [Errno 22] Invalid argument: "E*.dat". How can I get the E*.dat files?
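
A likely cause (an assumption, not confirmed by the authors): on Windows, passing the literal wildcard string "E*.dat" directly to open() or np.loadtxt raises exactly this OSError. A minimal sketch that expands the pattern first, assuming the E*.dat files have already been generated into the working directory:

```python
import glob

import numpy as np

# Hypothetical workaround: expand the wildcard before opening anything.
# A literal "E*.dat" path cannot be opened as a file and raises
# OSError: [Errno 22] Invalid argument on Windows.
for path in glob.glob("E*.dat"):
    data = np.loadtxt(path)
    print(path, data.shape)
```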

Replication problem: training process

Prof. Lu, hello!
I am trying to replicate the MFNN with PyTorch. However, I am not sure how to train it.

In the MFNN without residual in ref. 34, the training process trains the low-fidelity model separately first (updating only the low-fidelity parameters), then trains the high-fidelity model (updating both sets of parameters). For example, 20,000 epochs for the low-fidelity model, then 20,000 epochs for the high-fidelity model.

In this work (deepxde 1.1.12 source code), it seems the two parts are trained simultaneously within each epoch: low-fidelity data update only the low-fidelity model, while high-fidelity data update both sets of parameters.
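
For reference, a minimal PyTorch sketch of the "simultaneous" scheme described above (an interpretation, not the authors' code): each step combines a low-fidelity loss, whose gradients reach only the low-fidelity network, with a high-fidelity loss, whose gradients reach both networks. The network sizes and data below are placeholders.

```python
import torch
import torch.nn as nn

# Low- and high-fidelity sub-networks (placeholder architectures).
net_lo = nn.Sequential(nn.Linear(3, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 1))
net_hi = nn.Sequential(nn.Linear(3 + 1, 64), nn.Tanh(), nn.Linear(64, 1))

x_lo, y_lo = torch.rand(1000, 3), torch.rand(1000, 1)  # dummy low-fidelity data
x_hi, y_hi = torch.rand(20, 3), torch.rand(20, 1)      # dummy high-fidelity data

opt = torch.optim.Adam(list(net_lo.parameters()) + list(net_hi.parameters()), lr=1e-3)
mse = nn.MSELoss()

for step in range(20000):
    opt.zero_grad()
    loss_lo = mse(net_lo(x_lo), y_lo)                   # gradients reach only the low-fidelity net
    y_lo_at_hi = net_lo(x_hi)                           # low-fidelity prediction at high-fidelity inputs
    loss_hi = mse(net_hi(torch.cat([x_hi, y_lo_at_hi], dim=1)), y_hi)  # gradients reach both nets
    (loss_lo + loss_hi).backward()                      # one combined update per step
    opt.step()
```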

In the E* prediction, using only the low-fidelity model ([3, 128, 128, 1]) I can get a good result with MAPE < 5%. But when I train the high-fidelity model, on the contrary, I get a poor result.

In the sigma_y prediction, there is no convergence trend.

Another question is about the MFNN with residual:
$$y_{H} = a_{L} y_{L} + \epsilon\left(\tanh(\alpha_{1}) \cdot f_{\mathrm{linear}}(x, y_{L}) + \tanh(\alpha_{2}) \cdot f_{\mathrm{nonlinear}}(x, y_{L})\right)$$

Can it be replaced by a linear layer + tanh with input [linear part, nonlinear part]?
(I cannot understand how to train the two alpha parameters.)
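
For what it is worth, one possible way to make the two alpha scalars trainable in PyTorch is to register them as nn.Parameter, so they receive gradients like any other weight. This is only a sketch of the alpha-weighted combination (the a_L * y_L term and epsilon are omitted), not the authors' implementation:

```python
import torch
import torch.nn as nn

class HighFidelityHead(nn.Module):
    """Combines a linear and a nonlinear part with trainable alpha weights."""

    def __init__(self, in_dim):
        super().__init__()
        self.f_linear = nn.Linear(in_dim, 1)              # linear part
        self.f_nonlinear = nn.Sequential(                 # nonlinear part
            nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, 1)
        )
        self.alpha1 = nn.Parameter(torch.tensor(0.1))     # trainable scalar
        self.alpha2 = nn.Parameter(torch.tensor(0.1))     # trainable scalar

    def forward(self, x, y_low):
        z = torch.cat([x, y_low], dim=1)
        return torch.tanh(self.alpha1) * self.f_linear(z) + torch.tanh(self.alpha2) * self.f_nonlinear(z)
```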

Not compatible with most recent version of DeepXDE

Thanks for sharing the code with the community.

It seems there have been major updates to the DeepXDE package since this code was published, so would you please update this code
for the indentation problem accordingly?

If I install the newest DeepXDE, I get the error "module 'deepxde' has no attribute 'apply'", but if I go back to v0.8.0, the problem goes away.

Many thanks.
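
As a side note (an assumption based on how the helper is invoked elsewhere in these issues, e.g. dde.utils.apply inside validation_mf()): in recent DeepXDE releases the function appears under dde.utils, so a small shim may let the old dde.apply calls in nn.py run without downgrading to v0.8.0:

```python
import deepxde as dde

# Hedged shim: newer DeepXDE exposes the helper as dde.utils.apply,
# while the original nn.py calls dde.apply (present in v0.8.0).
if not hasattr(dde, "apply"):
    dde.apply = dde.utils.apply
```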

Code issues of the Transfer learning

Thanks for sharing this novel algorithm to identify the material properties.

I still have a few questions about transfer learning:

  1. For the function “validation_exp_cross_transfer”, it seems that only the 2D and 3D FEM datasets are involved in the pre-training. Could the experimental dataset also be included in pre-training and the result saved as a model file?

  2. If I want to train a 2D FEM + 3D FEM + EXP network as in Fig. 4 and apply it to identify the properties of other, unknown materials, how can this be achieved?

  3. What is the difference between the function “validation_exp_cross2” and the function “validation_exp_cross3”? It looks like the input and output of these two functions are the same.

Thanks.

Dataset and feeding

Hello Lu,

First of all, many thanks for your efforts and for making DeepXDE available; it is really useful.

I am trying to reimplement the multifidelity network alone in Pytorch. As I am not very familiar with TensorFlow, I am struggling a bit to properly understand. I would be happy if you could give me some advice.

As I understand it, at least with the TensorFlow backend, the number of epochs corresponds to a single pass over all samples in a dataset. In the case of multi-fidelity networks, we have the low- and high-fidelity datasets (forget about the experimental dataset for now) which, if I understood correctly, are used simultaneously to train the low- and high-fidelity 'portions' of the multi-fidelity model. My question is: what do you do when one dataset has fewer points than the other?

Take this as an example: the high-fidelity network needs as input the 'low-fidelity prediction' of a low-fidelity sample concatenated with a high-fidelity sample. What if one of the datasets, let's say the low-fidelity one, has already used all its samples? Do you start sampling from that dataset again until the total number of iterations (epochs over the dataset) is done? Do you sample with replacement, or do you use all samples and then start sampling from the dataset again?

Thanks in advance.
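
One common strategy (an assumption about how the size mismatch could be handled, not necessarily what DeepXDE does) is to resample a batch from each fidelity level at every iteration, drawing with replacement so the smaller dataset is simply reused:

```python
import numpy as np

rng = np.random.default_rng(0)
x_lo = np.arange(1000)  # dummy indices of low-fidelity samples
x_hi = np.arange(12)    # dummy indices of high-fidelity samples

def next_batch(pool, batch_size):
    # Sampling with replacement lets a small dataset be reused indefinitely.
    return rng.choice(pool, size=batch_size, replace=True)

for step in range(5):
    batch_lo = next_batch(x_lo, 32)
    batch_hi = next_batch(x_hi, 32)  # high-fidelity samples repeat across batches
    # ...evaluate the low- and high-fidelity losses on these batches...
```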

Questions on the implementation of transfer learning.

Dear lulu,

Thank you for your paper "Extraction of mechanical properties... " and your open-source code. They provide me with a novel approach to investigating mechanical problems.

I have run your code and am confused about how the transfer learning works. Specifically, I cannot find where the first training influences the second training of the multi-fidelity NN in nn.py.

Thanks for your time and I am looking forward to your reply.

replicating the results - MAPE

Hello Prof. Lu!
I am trying to replicate the results in Fig. S1 A, for which I am running validation_model("sigma_y", 10) and varying the train size from 10 to 20, 30, ..., 10,000 manually each time I run it.

My question is whether the MAPE on the y-axis in Fig. S1 A is the mean value of the MAPE output by validation_model. Running the model with train_size=10, I get a mean value of 1274.4625, which is not of the order of magnitude of the MAPE in Fig. S1 A. I have run it for train_size up to 100 and got no value close to the one in Fig. S1 A. I would appreciate your response on this, please.


Result:
sigma_y 10
1274.4625 689.9193

I have the same issue with Fig S1 B.

code issue

This might be a stupid question. I read the introduction to the code and uncommented a line of ExpData in data.py, but why do I still get an error when I run it?

How to read the .dat files?

Thanks for sharing this novel approach to studying material properties. I'm a beginner with neural networks and have some basic questions about how to read the .dat files output by the nn.py code.

  1. In test.dat and train.dat, why are the x values not the same as the ones in the .csv files?
  2. There are two columns of y values, as well as two columns of predicted y values; what is the purpose of having two columns of the same type of data?
  3. In the experimental data, are the elastoplastic properties measured in each test, or taken from a reference value for the same material?

Issue with validation_exp(), validation_mf()

Hello professor, thanks for your kindness and your work.
I'm a student following your code, and I ran into some issues while running the MFNN code.
I want to ask you some questions.

  1. When I run validation_exp(), I obtain two MAPE values and two STD values in vector form:
    "sigma_y
    [52.81542692 20.47970335] [11.47603269 2.44488563]"
    I expect 52.815 to be the MAPE of the low-fidelity NN output and 20.479 to be the MAPE of the total NN output.
    Can you tell me whether my expectation is right? If not, can you tell me what these values mean?

  2. I have run validation_exp() for two cases - (1) Al6061, (2) B3067 - and obtained MAPE values for sigma_y significantly different from those in your paper.

For case (1) Al6061, the paper (Fig. 4 A) suggests about 15% MAPE with NN(2D+3D). But when I try to get the MAPE for NN(2D+3D), I get a significantly larger value. Below is the result of my run.
"sigma_y
[3683.31674364 102.5687206 ] [3102.81756024 72.03417236]"

For case (2) B3067, the paper (Fig. 6 B) suggests 105.5% MAPE with NN(raw). But when I try to get the MAPE for NN(raw), I again get a different value, as in case (1). Below is the result of my run.
"sigma_y
[52.81542692 20.47970335] [11.47603269 2.44488563]"

I'm confused by the two different behaviors of the same code: in case (1), Al6061, it gives me a larger MAPE, but in case (2), B3067, it gives me a smaller MAPE. I want to know why I'm getting a wrong MAPE value from validation_exp(). Were there any training-dataset adjustments when you validated those values in the paper?

  3. This question is related to Q1. At the bottom of the validation_mf() code,
    mape.append(dde.utils.apply(mfnn, (data,))[0])

this is different from that of validation_exp():
res = dde.utils.apply(mfnn, (data,))
ape.append(res[:2])

The first call gives only one MAPE value, while the second gives two, and the first of those two values appears to equal the value returned by the first call.
In Fig. 3 D of your paper, with a high-fidelity training dataset size of 12, the MAPE shown is about 22%, and if I run the two code paths above, I obtain
22.575627271690337 11.91528043968248 and
[22.18273957 9.71804237] [12.79467543 12.11059052].
If my expectation from Q1 is right, the MAPE in this case should be 9.718, but the paper gives about 22 for this result. If my expectation is right, can you tell me why you used 22% MAPE as the result in Fig. 3 D?

These are the questions and issues I met while working with your code. Thanks again for your hard work and for giving access to your code.

Sincerely, Koo.

deepxde.apply not found

Hello,

I was trying to execute your code, and I'm facing an issue with a certain function call. deepxde has been imported as dde, and the function in question is dde.apply; the error I get is that deepxde has no attribute 'apply'.

Please help me through.

sincerely,
Surya Teja Koppula

Code issues

I just downloaded all the data here, and when I try to run the code, some errors occur.
It seems that in the fit_n file some data files cannot be found, and I still haven't understood which file should be provided.

High MAPE with DeepXDE 1.1.2 and TensorFlow 2.7.0 – Seeking Assistance

Hello Professor,
I am trying to present the results of Figure S1 A from your paper, specifically focusing on the values of E*.

I am using DeepXDE version 1.1.2 with TensorFlow 2.7.0 as the backend. The data is generated by executing the gen-forward code in model.py.

Is the MAPE value on the Y-axis of Figure S1 directly calculated in the code, or should it be separately computed from the result files?
The average MAPE values are excessively high. Can you suggest any potential issues causing this?
Thank you.
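
If it helps, here is a hedged sketch of computing MAPE directly from saved predictions, assuming you have arrays of true and predicted E* values (e.g. loaded from the test.dat output); the column layout of the .dat files may differ, so adjust the indexing accordingly:

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent.
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

data = np.loadtxt("test.dat")              # placeholder path
y_true, y_pred = data[:, -2], data[:, -1]  # assumed: last two columns hold true / predicted values
print("MAPE (%):", mape(y_true, y_pred))
```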


replicating the results - infinite loop

Hello!
First of all, thank you for publishing the code and the data for this paper.

There is an issue with "nn.py". If I run the code without choosing anything in "main()", it works. But if I uncomment anything in "main()", my PC gets caught in an infinite loop with always the same output in the Spyder console, and unfortunately I don't know why. Maybe it is because the packages have been updated since the code was published. Do you know the package versions you used? I would be very grateful if you could have a look at the attached screenshots and possibly offer a solution.

In one of the screenshots I marked a line in the error message. Maybe that's part of the problem.

[Screenshots attached: file_runs_if_nothing_uncommented, Infinite_loop]
