cics-nd / rans-uncertainty
Uncertainty Quantification of RANS Data-Driven Turbulence Modeling
Home Page: https://doi.org/10.1016/j.jcp.2019.01.021
License: MIT License
Dear Geneva,
I have succeeded in compiling the reynoldsNet library. Now, according to your tutorial, if I want to compile the simpleNN solver, I should first compile linearViscousStress. However, I noticed that the "linearViscousStress" directory lacks a "Make" folder. Do I need to add one?
The mesh cannot be converted from .geo to .msh unless the output format is specified explicitly. The fix is to add
-format msh2
to the gmsh call in meshes/convert-mesh.sh:
./gmsh ${MESH_NAME}.geo -3 -o ${MESH_NAME}.msh -algo frontal -algo front3d -format msh2
@NickGeneva
I have a small question about the expectation and variance of beta. It is Gamma distributed with shape=100, rate=2e-4.
In the paper, you wrote "This weakly promotes large beta with an expected value of 5e5 and a variance on the order of 1e-3". However, based on the equation here, the expectation should be shape / rate = 5e5, and the variance should be shape / (rate * rate) = 2.5e9. Do I misunderstand something?
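For reference, the moments of a Gamma(shape, rate) distribution can be checked in a couple of lines (a minimal sketch, independent of the repository code):

```python
# For a Gamma(shape, rate) distribution: mean = shape / rate, var = shape / rate**2.
shape, rate = 100.0, 2e-4
mean = shape / rate        # expected value of beta, about 5e5
var = shape / rate ** 2    # variance of beta, about 2.5e9
print(mean, var)
```

This agrees with the questioner's numbers, so the "variance on the order of 1e-3" in the paper does look inconsistent with the stated shape and rate.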
Could you please describe how to specify LOCATION OF CAFFE2 LIBS in
/sdd-rans/TurbulenceModels/turbulenceModels/reynoldsNet/Make/options
I am getting an error when compiling libraries:
In file included from reynoldsNet.C:24:0:
reynoldsNet.H:41:10: fatal error: caffe2/core/common.h: No such file or directory
#include "caffe2/core/common.h"
Thank you.
Regards,
Arsen
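For anyone hitting the same include error: the caffe2 headers and libraries have to be made visible in the reynoldsNet Make/options file via the wmake variables EXE_INC and LIB_LIBS. A sketch of what that might look like; the paths below are placeholders that depend on where your PyTorch/caffe2 build lives, and the exact library names may differ between PyTorch versions:

```make
EXE_INC = \
    -I/path/to/pytorch/torch/include

LIB_LIBS = \
    -L/path/to/pytorch/torch/lib \
    -lcaffe2 -lc10 -lprotobuf
```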
It is necessary to install protobuf 3.6.1 (newer and older versions will not work):
tar -xf protobuf-all-3.6.1.tar.gz
cd protobuf-3.6.1
./configure
make
make check
sudo make install
cd ./python
python3 setup.py build
python3 setup.py test
sudo python3 setup.py install
When I compiled the simpleNN solver, an error appeared: undefined reference to `c10::hiptensorid()'. It seems that I didn't build the link between caffe2 and OpenFOAM. Did anyone meet similar problems? Could you tell me how to solve it?
Could you please upload the OpenFOAM case directories (constant and system folders) for the LES simulations (similar to the ones in training-data//openfoam)?
That would make it clear how to perform the LES runs.
Thank you.
Regards
Arsen
Hi,
I have two questions about the way the log-likelihood is computed in the code.
For mini-batch training, the log-likelihood is multiplied by a factor of N/B, where N is the total number of training samples and B is the number of samples in the mini-batch. What is the rationale behind this? I know this follows from Ref. [1], but wouldn't this erroneously give more weight to the likelihood as we decrease the batch size, since the prior probability does not change? That would be akin to maximizing the likelihood rather than the full a posteriori distribution. What am I missing here?
The code is used to predict the Reynolds-stress anisotropy tensor b, which has 6 unique components. Assuming each component is iid normally distributed, the joint distribution would be something like:
P(b | w, beta) = N(b1 | w, beta) * N(b2 | w, beta) * ... * N(b6 | w, beta)
where b1, ..., b6 are the unique components of b.
This is the distribution for one sample. For N samples we take a similar product N times.
Now if we look at the code, the likelihood computation reads
log_likelihood = len(self.trainingLoader.dataset) / output.size(0) \
    * (-0.5 * self.models[index].log_beta.exp()
       * (target - output).pow(2).sum()
       + 0.5 * target.numel()
       * self.models[index].log_beta)
This does not take into account that there are 6 components of b. A correct likelihood should instead be
log_likelihood = len(self.trainingLoader.dataset) / output.size(0) \
    * (-0.5 * self.models[index].log_beta.exp()
       * (target - output).pow(2).sum()
       + 0.5 * (target.numel() + 6)
       * self.models[index].log_beta)
where we need to add 6 to the target.numel() term. This constant could have a significant effect, especially for small mini-batch sizes.
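As a sanity check on the form of the terms (independent of PyTorch and of the repository code), the Gaussian log-likelihood with precision beta can be written out and compared against a direct per-element evaluation; the data and precision here are illustrative:

```python
import numpy as np

def gaussian_loglik(target, output, beta):
    # sum over elements of log N(t_i | o_i, 1/beta),
    # dropping the constant -0.5*log(2*pi) per element
    n = target.size
    return -0.5 * beta * np.sum((target - output) ** 2) + 0.5 * n * np.log(beta)

rng = np.random.default_rng(0)
t = rng.normal(size=10)
o = rng.normal(size=10)
beta = 2.0

# direct evaluation including the constant term
full = np.sum(-0.5 * np.log(2.0 * np.pi / beta) - 0.5 * beta * (t - o) ** 2)
# the compact form plus the dropped constant must match
approx = gaussian_loglik(t, o, beta) - 0.5 * t.size * np.log(2.0 * np.pi)
print(np.isclose(full, approx))  # True
```

This shows where the 0.5 * numel * log_beta term in the code comes from: one 0.5 * log(beta) per scalar element of the target.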
[1] Y. Zhu, N. Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 2018.
Regards
Deepak
The viscosity in training-data/square-cylinder/openfoam/constant/transportProperties
should be 0.0002 instead of 0.00002.
Regards
Arsen
Hi,
I was looking at the code for the log of the gamma output noise prior as computed in the compute_loss function in foamSVGD.py. Currently the code reads,
prior_log_beta = (self.prior_beta_shape * self.models[index].log_beta \
- self.models[index].log_beta.exp() * self.prior_beta_rate).sum()
I believe the correct form should be
prior_log_beta = ((self.prior_beta_shape-1.0) * self.models[index].log_beta \
- self.models[index].log_beta.exp() * self.prior_beta_rate).sum()
Is this correct or am I missing something?
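Regarding the missing (shape - 1) factor: the Gamma(shape, rate) log-density is log p(beta) = (shape - 1) * log(beta) - rate * beta + const, which supports the proposed correction. A quick self-contained numerical check (shape and rate are the paper's values; the test points b1, b2 are arbitrary):

```python
import math

def log_gamma_pdf(beta, shape, rate):
    # fully normalized log-density of Gamma(shape, rate)
    return (shape * math.log(rate) - math.lgamma(shape)
            + (shape - 1.0) * math.log(beta) - rate * beta)

def log_gamma_unnorm(log_beta, shape, rate):
    # the (shape - 1) form discussed above, written in terms of log_beta
    return (shape - 1.0) * log_beta - rate * math.exp(log_beta)

shape, rate = 100.0, 2e-4
b1, b2 = 4.0e5, 6.0e5
# differences of log-densities are free of the normalizing constant, so the
# two forms must agree on them if the (shape - 1) coefficient is correct
diff_full = log_gamma_pdf(b1, shape, rate) - log_gamma_pdf(b2, shape, rate)
diff_unnorm = (log_gamma_unnorm(math.log(b1), shape, rate)
               - log_gamma_unnorm(math.log(b2), shape, rate))
print(abs(diff_full - diff_unnorm) < 1e-8)  # True
```

Since only gradients of the log-posterior matter for SVGD updates of the network weights, the missing constant would mainly affect the update of log_beta itself.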
Regards
There are no .geo mesh files for the square-cylinder case (RANS and LES).
Could you please upload them?
Regards,
Arsen
Hi,
I think the weights of the neural network should be initialised by sampling from the prior distribution. In the current implementation, they are sampled from a standard normal distribution. I think this would help convergence.
Regards
Deepak
@NickGeneva Thank you for sharing the code.
I am still very new to BNNs. When trying to understand the loss function, I could not match this code with the equations in the paper.
rans-uncertainty/invar-nn/nn/foamSVGD.py
Lines 138 to 144 in 7d99fd9
For the weight prior, the code is
rans-uncertainty/invar-nn/nn/foamSVGD.py
Lines 146 to 151 in 7d99fd9
Could you provide a more detailed explanation or mathematical derivation to help in understanding the above code, please?
Best,
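Not one of the authors, but for what it is worth: as I understand the paper, the weight prior is a Student's-t density obtained by placing a Gamma prior on the weight precision and marginalizing it out (the Normal-Gamma construction of Ref. [1] above). A sketch of the resulting log-density with a brute-force numerical check; the hyper-parameters a, b and the weight value w below are placeholders, not the repository's values:

```python
import math
import numpy as np

def log_student_t_prior(w, a, b):
    # closed-form log of p(w) = integral of N(w | 0, 1/lam) * Gamma(lam | a, rate=b)
    # over lam, i.e. a Student's-t density; note the -(a + 1/2) * log(b + w^2/2) term
    return (math.lgamma(a + 0.5) - math.lgamma(a) + a * math.log(b)
            - 0.5 * math.log(2.0 * math.pi)
            - (a + 0.5) * math.log(b + 0.5 * w * w))

a, b, w = 1.0, 0.03, 0.2  # hypothetical values, for illustration only

# brute-force check: numerically integrate the Normal-Gamma mixture over lam
lam = np.linspace(1e-8, 400.0, 1_000_000)
integrand = (np.sqrt(lam / (2.0 * np.pi)) * np.exp(-0.5 * lam * w * w)
             * (b ** a / math.gamma(a)) * lam ** (a - 1.0) * np.exp(-b * lam))
numeric = float(np.sum(integrand) * (lam[1] - lam[0]))
print(abs(math.log(numeric) - log_student_t_prior(w, a, b)) < 1e-3)  # True
```

If the code in foamSVGD.py implements this marginal, the terms there should match the closed form above up to additive constants that do not affect the SVGD gradient.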
It looks like the old functions do not work in the PyTorch version of caffe2. When compiling reynoldsNet I am getting the following errors:
reynoldsNet.C: In member function ‘std::vector caffe2::reynoldsNet::forward(std::vector) const’:
reynoldsNet.C:87:52: error: no matching function for call to ‘caffe2::Tensor::Tensor(, std::vector&, caffe2::CPUContext*)’
auto inputTensor = Tensor({1, 5}, inputdata, &ctx);
/home/aiskhak/anaconda3/pkgs/pytorch-nightly-cpu-1.2.0.dev20190731+cpu-py3.7_cpu_0/lib/python3.7/site-packages/torch/include/caffe2/core/tensor.h:29:3: note: candidate expects 2 arguments, 3 provided
reynoldsNet.C:101:62: error: use of deleted function ‘caffe2::Tensor::Tensor(const caffe2::Tensor&)’
auto outBlob = workSpace.GetBlob("output")->Get();
Could you please help me resolve these issues?
Regards,
Arsen
Hi,
Just a short notice: dataManager.py, line 72:
mydict.update({'idxmask': np.array([])}) makes idxmask a float array, which is not allowed to be used as an array index. It should be:
mydict.update({'idxmask': np.array([], dtype=np.int32)})
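A minimal standalone reproduction of the failure mode, using only NumPy:

```python
import numpy as np

a = np.arange(5)

bad = np.array([])                   # empty array defaults to dtype float64
try:
    a[bad]
except IndexError:
    print("float index array rejected")  # NumPy requires integer (or boolean) indices

good = np.array([], dtype=np.int32)  # empty integer array indexes fine
print(a[good])                       # empty selection
```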
Regards,
Arsen