
cics-nd / rans-uncertainty

53 stars · 11 watchers · 30 forks · 356.24 MB

Uncertainty Quantification of RANS Data-Driven Turbulence Modeling

Home Page: https://doi.org/10.1016/j.jcp.2019.01.021

License: MIT License

Python 7.15% GLSL 18.70% Shell 0.06% C 2.27% C++ 71.76% Objective-C 0.06%
bayesian-surrogate rans turbulence-models stein-variational-gradient-decent deep-learning fluid-dynamics

rans-uncertainty's People

Contributors

absolutestratos, nickgeneva


rans-uncertainty's Issues

Cannot convert mesh from .geo to .msh

The mesh cannot be converted from .geo to .msh as-is: recent gmsh versions write the MSH4 format by default, so it is necessary to pass -format msh2 in meshes/convert-mesh.sh:

./gmsh ${MESH_NAME}.geo -3 -o ${MESH_NAME}.msh -algo frontal -algo front3d -format msh2

expectation and variance of beta

@NickGeneva
I have a small question about the expectation and variance of beta. It is Gamma distributed with shape=100, rate=2e-4.

In the paper, you wrote "This weakly promotes large beta with an expected value of 5e5 and a variance on the order of 1e-3". However, from the standard Gamma moments, the expectation should be shape / rate = 5e5, and the variance should be shape / (rate * rate) = 2.5e9. Do I misunderstand something?
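For reference, the moments of a Gamma(shape = a, rate = b) distribution are

E[beta] = a / b = 100 / 2e-4 = 5e5
Var[beta] = a / b^2 = 100 / (2e-4)^2 = 2.5e9

so the stated expectation checks out, while 1e-3 cannot be the variance of beta itself. One possible reading (speculation, not confirmed by the paper): the implied output-noise scale is sqrt(1 / E[beta]) = sqrt(2e-6) ≈ 1.4e-3, which is on the order of 1e-3.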

How to specify <LOCATION OF CAFFE2 LIBS>

Could you please describe how to specify LOCATION OF CAFFE2 LIBS in
/sdd-rans/TurbulenceModels/turbulenceModels/reynoldsNet/Make/options

I am getting the following error when compiling the libraries:

In file included from reynoldsNet.C:24:0:
reynoldsNet.H:41:10: fatal error: caffe2/core/common.h: No such file or directory
#include "caffe2/core/common.h"

Thank you.

Regards,
Arsen
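For reference, a minimal sketch of what the Make/options entries might look like, assuming (hypothetically) that the caffe2 headers and libraries come from a PyTorch source build rooted at /opt/pytorch; the actual paths and library names depend on your PyTorch/caffe2 version (older builds ship libcaffe2, newer ones fold it into libtorch/libc10):

EXE_INC = \
    -I$(LIB_SRC)/TurbulenceModels/turbulenceModels/lnInclude \
    -I/opt/pytorch/torch/include

LIB_LIBS = \
    -L/opt/pytorch/torch/lib \
    -lcaffe2 -lc10 -lprotobuf

Whatever the exact layout, the -I path must be the directory that directly contains caffe2/core/common.h, which is what the compiler reports as missing.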

Important dependency for building the SDD-RANS libraries

It is necessary to install protobuf 3.6.1 exactly (newer and older versions will not work):

tar -xf protobuf-all-3.6.1.tar.gz
cd protobuf-3.6.1
./configure
make
make check
sudo make install
cd ./python
python3 setup.py build
python3 setup.py test
sudo python3 setup.py install
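To verify that the correct version is picked up afterwards (assuming protoc is on your PATH):

protoc --version
# should print: libprotoc 3.6.1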

Example of solver for LES

Could you please upload the openfoam directories (constant and system folders) used for the LES simulations (similar to the ones in training-data//openfoam)?

That would make it clear how the LES cases were set up and run.

Thank you.

Regards
Arsen

Log likelihood computation in the code

Hi,

I have two questions about the way the log-likelihood is computed in the code.

  1. For mini-batch training, the log-likelihood is multiplied by a factor of N/B, where N is the total number of training samples and B is the number of samples in the mini-batch. What is the rationale behind this? I know this follows from Ref. [1], but wouldn't it erroneously give more weight to the likelihood function as we decrease the batch size, since the prior probability does not change? This would then be akin to maximizing the likelihood rather than the full posterior distribution. What am I missing here?

  2. The code is used to predict the Reynolds-stress anisotropy tensor, which has 6 unique components. Assuming the components are i.i.d. normally distributed, the joint distribution would be something like

P(b | w, beta) = N(b1 | w, beta) * N(b2 | w, beta) * ... * N(b6 | w, beta)

where b1, ..., b6 are the unique components of b.

This is the distribution for one sample. For N samples we take a similar product N times.

Now if we look at the code, the likelihood computation reads

log_likelihood = len(self.trainingLoader.dataset) / output.size(0) \
                                * (-0.5 * self.models[index].log_beta.exp()
                                * (target - output).pow(2).sum()
                                + 0.5 * target.numel()
                                * self.models[index].log_beta)

This does not take into account that there are 6 components of b. A correct likelihood should instead be

log_likelihood = len(self.trainingLoader.dataset) / output.size(0) \
                                * (-0.5 * self.models[index].log_beta.exp()
                                * (target - output).pow(2).sum()
                                + 0.5 * (target.numel()+6)
                                * self.models[index].log_beta)

where we need to add 6 to the target.numel() term. This constant could have a significant effect, especially for small mini-batch sizes.

[1] Y. Zhu, N. Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 2018.

Regards
Deepak
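For context on question 1: the N/B factor is the standard unbiased mini-batch estimate of the full-data log-likelihood used in stochastic gradient Bayesian methods,

log p(D | w, beta) = sum_{i=1..N} log p(t_i | w, beta) ≈ (N / B) * sum_{i in batch} log p(t_i | w, beta)

In expectation over random mini-batches the likelihood therefore receives the same total weight for any B; shrinking the batch only increases the variance of the estimate, it does not re-weight the likelihood against the prior.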

Log gamma output noise prior

Hi,

I was looking at the code for the log of the gamma output noise prior as computed in the compute_loss function in foamSVGD.py. Currently the code reads,

prior_log_beta = (self.prior_beta_shape * self.models[index].log_beta \
                        - self.models[index].log_beta.exp() * self.prior_beta_rate).sum()

I believe the correct form should be

prior_log_beta = ((self.prior_beta_shape-1.0) * self.models[index].log_beta \
                        - self.models[index].log_beta.exp() * self.prior_beta_rate).sum()

Is this correct or am I missing something?

Regards
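One possible reconciliation (an observation, not a confirmed reading of the authors' intent): the two forms differ by exactly one factor of log(beta), which is what a change of variables to theta = log(beta) contributes through the Jacobian |d beta / d theta| = beta:

log p(beta) = (a - 1) * log(beta) - b * beta + const
log p(log beta) = log p(beta) + log(beta) = a * log(beta) - b * beta + const

Under that reading the original code is the correct log-density if the latent variable being sampled is log_beta, while the suggested form is correct for beta itself.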

Initialisation of weights of the neural networks

Hi,

I think the weights of the neural network should be initialised by sampling from the prior distribution. In the current implementation, they are sampled from a standard normal distribution instead. I think initialising from the prior would help convergence.

Regards
Deepak
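A minimal sketch of what prior-based initialisation could look like (hypothetical; the function name and default hyperparameter values are placeholders, not the repo's): draw a precision alpha from the Gamma hyperprior, then sample each weight tensor from N(0, 1/alpha).

import torch as th

def init_from_prior(model, prior_w_shape=1.0, prior_w_rate=0.05):
    # Hypothetical sketch: initialise weights by sampling the hierarchical
    # Gamma-Normal prior instead of a standard normal.
    for param in model.parameters():
        # Draw one precision per parameter tensor from the Gamma hyperprior
        alpha = th.distributions.Gamma(prior_w_shape, prior_w_rate).sample()
        # Given the precision, the weights are N(0, 1/alpha)
        param.data.normal_(0.0, float((1.0 / alpha).sqrt()))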

mathematical derivation of log likelihood and weight prior corresponding to code

@NickGeneva Thank you for sharing the code.

I am still very new to BNNs. When trying to understand the loss function, I could not match this code with the equations in the paper.

  • For the likelihood, the code is

    # Log Gaussian likelihood
    # See Eq. 18-19 in paper
    log_likelihood = len(self.trainingLoader.dataset) / output.size(0) \
        * (-0.5 * self.models[index].log_beta.exp()
           * (target - output).pow(2).sum()
           + 0.5 * target.numel()
           * self.models[index].log_beta)

    The mathematics should be (my understanding; reconstructed here from the code above, as the original post attached a screenshot):

    log-likelihood = (N / B) * ( -(beta / 2) * sum_i ||t_i - o_i||^2 + (M / 2) * log(beta) )

    where N = len(trainingLoader.dataset), B = output.size(0), o_i is the network output for sample i, and M = target.numel().

For the weight prior, the code is

# Log Gaussian weight prior
# See Eq. 17 in paper
prior_ws = Variable(th.Tensor([0]).type(dtype))
for param in self.models[index].parameters():
    prior_ws += th.log1p(0.5 / self.prior_w_rate * param.pow(2)).sum()
prior_ws *= -(self.prior_w_shape + 0.5)

I haven't figured out why Eq. (17) should be calculated in this way.

Could you give a more specific explanation or mathematical derivation to help me understand the above code, please?

Best,
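A sketch of the derivation the log1p expression appears to implement (my reading, under the assumption that Eq. 17 is the Student-t marginal of a Gaussian weight prior with a Gamma hyperprior on the precision alpha):

p(w) = integral of N(w | 0, 1/alpha) * Gamma(alpha | a, b) d(alpha) ∝ (1 + w^2 / (2b))^-(a + 1/2)

log p(w) = -(a + 1/2) * log(1 + w^2 / (2b)) + const
         = -(prior_w_shape + 0.5) * log1p(0.5 / prior_w_rate * w^2) + const

which matches the code term by term, with a = prior_w_shape and b = prior_w_rate.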

What version of caffe2 do you use?

It looks like the old functions do not work in the PyTorch version of caffe2. When compiling reynoldsNet I am getting the following errors:

reynoldsNet.C: In member function ‘std::vector<float> caffe2::reynoldsNet::forward(std::vector<float>) const’:
reynoldsNet.C:87:52: error: no matching function for call to ‘caffe2::Tensor::Tensor(<brace-enclosed initializer list>, std::vector<float>&, caffe2::CPUContext*)’
auto inputTensor = Tensor({1, 5}, inputdata, &ctx);

/home/aiskhak/anaconda3/pkgs/pytorch-nightly-cpu-1.2.0.dev20190731+cpu-py3.7_cpu_0/lib/python3.7/site-packages/torch/include/caffe2/core/tensor.h:29:3: note: candidate expects 2 arguments, 3 provided
reynoldsNet.C:101:62: error: use of deleted function ‘caffe2::Tensor::Tensor(const caffe2::Tensor&)’
auto outBlob = workSpace.GetBlob("output")->Get<Tensor>();

Could you please help me resolve these issues?

Regards,
Arsen

Float index is not allowed

Hi,

Just a short notice about dataManager.py, Line 72:

mydict.update({'idxmask': np.array([])})

makes idxmask a float array, which is not allowed to be used as an array index. It should be:

mydict.update({'idxmask': np.array([], dtype=np.int32)})

Regards,
Arsen
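A quick demonstration of the failure mode (self-contained; nothing assumed beyond NumPy):

import numpy as np

a = np.arange(5)
good_idx = np.array([], dtype=np.int32)
bad_idx = np.array([])              # dtype defaults to float64

print(a[good_idx])                  # [] -- an integer-typed empty index is fine
try:
    a[bad_idx]                      # a float-typed index is rejected
except IndexError as err:
    print(err)                      # arrays used as indices must be of integer (or boolean) type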
