amrex-combustion / pelelmex

An adaptive mesh hydrodynamics simulation code for low Mach number reacting flows without level sub-cycling.

Home Page: https://amrex-combustion.github.io/PeleLMeX/

License: BSD 3-Clause "New" or "Revised" License

Makefile 1.09% Python 3.58% C++ 80.90% C 1.65% Roff 7.66% Shell 1.00% Gnuplot 0.04% CMake 3.53% TeX 0.55%
amrex combustion low-mach-number gpu-acceleration

pelelmex's Introduction

PeleLMeX


Overview

PeleLMeX is a solver for high-fidelity reactive flow simulations, namely direct numerical simulation (DNS) and large eddy simulation (LES). The solver combines a low Mach number approach, adaptive mesh refinement (AMR), embedded boundary (EB) geometry treatment, and high performance computing (HPC) to provide a flexible tool for addressing research questions on platforms ranging from small workstations to the world's largest GPU-accelerated supercomputers. PeleLMeX has been used to study complex flame/turbulence interactions in RCCI engines and in hydrogen combustion, as well as the effect of sustainable aviation fuel on gas turbine combustion.

PeleLMeX is part of the Pele combustion Suite.

Documentation

The full PeleLMeX documentation is available at https://amrex-combustion.github.io/PeleLMeX/.

PeleLMeX is a non-subcycling version of PeleLM based on AMReX's AmrCore, borrowing from the incompressible solver incflo. It solves the multispecies reactive Navier-Stokes equations in the low Mach number limit as described in the documentation. It inherits most of PeleLM's algorithmic features but differs significantly in its implementation owing to the non-subcycling approach. PeleLM is no longer under active development; PeleLMeX should be used for simulations of low Mach number reacting flows, and PeleC for simulations of flows at higher Mach numbers where compressibility effects are significant.

An overview of PeleLMeX controls is provided in the documentation.

Core Algorithm

The PeleLMeX governing equations and core algorithms are described in:

https://amrex-combustion.github.io/PeleLMeX/manual/html/Model.html#mathematical-background

https://amrex-combustion.github.io/PeleLMeX/manual/html/Model.html#pelelmex-algorithm

Tutorials

A set of self-contained tutorials describing more complex problems is also provided:

https://amrex-combustion.github.io/PeleLMeX/manual/html/Tutorials.html

Installation

Requirements

Compiling PeleLMeX requires a C++17-compatible compiler (GCC >= 8 or Clang >= 3.6) as well as CMake >= 3.23 for compiling the SUNDIALS third-party library.

Most of the examples provided hereafter and in the tutorials use MPI to run in parallel. Although not mandatory, it is advisable to build PeleLMeX with MPI support from the start if more than a single core is available to you. Either MPICH or Open MPI is a suitable option if MPI is not already available on your platform.

Finally, when building with GPU support, CUDA >= 11 is required for NVIDIA GPUs and ROCm >= 5.2 for AMD GPUs.

Download

The preferred method consists of cloning PeleLMeX and its submodules (PelePhysics, amrex, AMReX-Hydro, and SUNDIALS) using a recursive git clone:

git clone --recursive --shallow-submodules --single-branch https://github.com/AMReX-Combustion/PeleLMeX.git

The --shallow-submodules and --single-branch flags are recommended for most users as they substantially reduce the size of the download by skipping extraneous parts of the git history. Developers may wish to omit these flags in order to download the complete git history of PeleLMeX and its submodules, though standard git commands may also be used after a shallow clone to obtain the skipped portions if needed (see the sketch below).
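For reference, one way the skipped history could be recovered later from such a shallow, single-branch clone (standard git commands; exact needs may vary):

git remote set-branches origin "*" && git fetch                       # fetch the branches skipped by --single-branch
git submodule foreach --recursive 'git fetch --unshallow || true'     # deepen each shallow submodule, skipping any already complete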

Alternatively, you can use a separate git clone of each of the submodules. The default location for the PeleLMeX dependencies is the Submodules folder, but you can optionally set up the following environment variables (e.g., using bash) to point to any other location:

export PELE_HOME=<path_to_PeleLMeX>
export AMREX_HYDRO_HOME=${PELE_HOME}/Submodules/AMReX-Hydro
export PELE_PHYSICS_HOME=${PELE_HOME}/Submodules/PelePhysics
export AMREX_HOME=${PELE_PHYSICS_HOME}/Submodules/amrex
export SUNDIALS_HOME=${PELE_PHYSICS_HOME}/Submodules/sundials

Compilation

Both GNUmake and CMake can be used to build PeleLMeX executables. GNUmake is the preferred choice for building a single executable to run production simulations, while CMake is the preferred method for automatically building and testing most of the available executables. The code handling the initial and boundary conditions is unique to each case, and subfolders in the Exec directory provide a number of examples.

For instance, to compile the executable for the case of a rising hot bubble, move into the HotBubble folder:

cd PeleLMeX/Exec/RegTests/HotBubble

If this is a clean install, you will need to build the third-party libraries with make TPL (note: on macOS, you might need to specify COMP=llvm in the make statements).

Finally, make with: make -j, or if on macOS: make -j COMP=llvm. To clean the installation, use either make clean or make realclean. If you run into compile errors after changing compile-time options in PeleLMeX (e.g., the chemical mechanism), the first thing to try is to clean your build by running make TPLrealclean && make realclean, then rebuild the third-party libraries and PeleLMeX with make TPL && make -j. See the Tutorial for this case for instructions on how to compile with different options (for example, to compile without MPI support or to compile for GPUs) and how to run the code once compiled.
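For convenience, the GNUmake steps above for the HotBubble case can be summarized as follows (MPI build assumed; append COMP=llvm to the make commands on macOS):

cd PeleLMeX/Exec/RegTests/HotBubble
make TPL                               # build the third-party libraries (SUNDIALS) once per configuration
make -j                                # build the PeleLMeX executable for this case
# After changing compile-time options (e.g. the chemical mechanism), rebuild from scratch:
make TPLrealclean && make realclean
make TPL && make -j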

To compile and test using CMake, refer to the example cmake.sh script in the Build directory, or reference the GitHub Actions workflows in the .github/workflows directory.

Getting help, contributing

Do you have a question? Found an issue? Please use the GitHub Discussions to engage with the development team, or open a new GitHub issue to report a bug. The development team also encourages users to take an active role in respectfully answering each other's questions in these spaces. When reporting a bug, it is helpful to provide as much detail as possible, including a case description and the major compile-time and runtime options being used. Though not required, the most effective approach is to create a fork of this repository and share a branch of that fork with a case that minimally reproduces the error.

New contributions to PeleLMeX are welcome! Contributing guidelines are provided in CONTRIBUTING.md.

Acknowledgment

This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations -- the Office of Science and the National Nuclear Security Administration -- responsible for the planning and preparation of a capable exascale ecosystem -- including software, applications, hardware, advanced system engineering, and early testbed platforms -- to support the nation's exascale computing imperative.

Citation

To cite PeleLMeX, please use the following references for PeleLMeX and the Pele software suite:

@article{PeleLMeX_JOSS,
  doi = {10.21105/joss.05450},
  url = {https://doi.org/10.21105/joss.05450},
  year = {2023},
  month = oct,
  publisher = {The Open Journal},
  volume = {8},
  number = {90},
  pages = {5450},
  author = {Lucas Esclapez and Marc Day and John Bell and Anne Felden and Candace Gilet and Ray Grout and Marc {Henry de Frahan} and Emmanuel Motheau and Andrew Nonaka and Landon Owen and Bruce Perry and Jon Rood and Nicolas Wimer and Weiqun Zhang},
  journal = {Journal of Open Source Software},
  title= {{PeleLMeX: an AMR Low Mach Number Reactive Flow Simulation Code without level sub-cycling}}
}

@article{PeleSoftware,
  author = {Marc T. {Henry de Frahan} and Lucas Esclapez and Jon Rood and Nicholas T. Wimer and Paul Mullowney and Bruce A. Perry and Landon Owen and Hariswaran Sitaraman and Shashank Yellapantula and Malik Hassanaly and Mohammad J. Rahimi and Michael J. Martin and Olga A. Doronina and Sreejith N. A. and Martin Rieth and Wenjun Ge and Ramanan Sankaran and Ann S. Almgren and Weiqun Zhang and John B. Bell and Ray Grout and Marc S. Day and Jacqueline H. Chen},
  title = {The Pele Simulation Suite for Reacting Flows at Exascale},
  journal = {Proceedings of the 2024 SIAM Conference on Parallel Processing for Scientific Computing},
  pages = {13-25},
  doi = {10.1137/1.9781611977967.2},
  url = {https://epubs.siam.org/doi/abs/10.1137/1.9781611977967.2},
  eprint = {https://epubs.siam.org/doi/pdf/10.1137/1.9781611977967.2},
  year = {2024},
  publisher = {SIAM}
}

pelelmex's People

Contributors

baperry2, bssoriano, cgilet, dependabot[bot], esclapez, jrood-nrel, ldowen, marchdf, nickwimer, olivecha, sreejithnrel, thomashowarth, tom-y-liu, vtmaran, weiqunzhang, wjge


pelelmex's Issues

Add the diffusion coefficients of each species in the PLT directories.

I want to store the diffusion coefficients of each species in the plotfiles, just like the mass fractions and the production rate of each species, because I need them in the PeleAnalysis part to perform calculations.

I tried to see how I_R and Y(species) are stored, but I can't find the arrays where the diffusion coefficients are stored.

Could you help me add the diffusion coefficients in the output?

Thank you in advance.

Default value of active_control.velMax

Upon adding active control to my simulation case, the inlet speed kept going to zero, resulting in a flashback.

I found out by looking at PeleLM.H that m_ctrl_velMax is set to zero by default, resulting in Vnew always being equal to zero:

225     Vnew = std::max(Vnew, 0.0);
227     Vnew = std::min(Vnew, m_ctrl_velMax);

I don't have an issue anymore with using active control as I know I can just supply a value to make my case work, but perhaps something should be changed either in the TripleFlame example:

active_control.velMax = 1.0               # Optional: limit inlet velocity

Or in the code, by adding a condition around line 227 of PeleLMFlowControler.cpp (slow?) or setting an arbitrarily large default value in PeleLM.H (not good practice?); a sketch of the first option is given below.
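For illustration only, a minimal sketch of the first option (a guard around the clipping at line 227; variable names are taken from the snippet above, and this is not an actual patch):

Vnew = std::max(Vnew, 0.0);
// Hypothetical guard: only apply the upper clip when the user supplied a positive velMax
if (m_ctrl_velMax > 0.0) {
  Vnew = std::min(Vnew, m_ctrl_velMax);
}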

I would be happy to submit a pull request with the corresponding changes.

Post-processing plt files

I would like to perform post-processing on plt files, but I am having trouble understanding the data storage format within these files.
Specifically, I want to extract the temperature field and calculate the temperature gradient using Python. Is it possible to achieve this?
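For what it's worth, PeleLMeX plotfiles are standard AMReX (BoxLib-format) plotfiles, so they can be read directly from Python with the yt package (used elsewhere on this page). A rough, untested sketch, where the plotfile name and the field name "temp" are only assumptions to be checked against ds.field_list:

import yt

ds = yt.load("plt01000")        # path to a PeleLMeX plotfile directory (hypothetical name)
print(ds.field_list)            # shows which fields are actually stored in the plotfile
ad = ds.all_data()
temp = ad["boxlib", "temp"]     # "temp" is an assumed field name; check ds.field_list
print(float(temp.min()), float(temp.max()))

From there, the AMR data can be resampled onto a uniform grid (e.g. with ds.covering_grid) and gradients computed with standard NumPy tools; PeleAnalysis, mentioned above, provides more involved post-processing utilities.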

Reading plt files using self-made python script?

I would like to perform post-processing in Python without going through PeleAnalysis.

Firstly, is this possible? If so, I would like to know how to read the files, for instance to create an animation of the movement of the maximum heat release within the domain, given that we can have up to 10 levels of AMR.

Auto-ignition in CounterFlow case with lu_88sk

case_1150K
The numerical setup is shown in the provided picture.
When I set the inlet air to be 1500K, the case will run normally and finally reach a steady state with a diffusion flame.
However, when I change the inlet air to 1150K (according to http://dx.doi.org/10.1016/j.proci.2016.06.101, it should be an auto-ignition case with a cool flame), PeleLMeX reports an error and returns the following backtrace.

I would like to ask whether PeleLMeX can be used to calculate auto-ignition cases. What are the possible meanings of this error, and how might it be resolved? Thanks for any reply.

Host Name: ibnode74
=== If no file names and line numbers are shown below, one can run
addr2line -Cpfie my_exefile my_line_address
to convert my_line_address (e.g., 0x4a6b) into file name and line number.
Or one can use amrex/Tools/Backtrace/parse_bt.py.

=== Please note that the line number reported by addr2line may not be accurate.
One can use
readelf -wl my_exefile | grep my_line_address'
to find out the offset for that line.

0: ./PeleLMeX2d.gnu.MPI.ex() [0xf1c1c6]
amrex::BLBackTrace::print_backtrace_info(_IO_FILE*)
/share/home/zhengjian/Pele/amrex/Src/Base/AMReX_BLBackTrace.cpp:199:36

1: ./PeleLMeX2d.gnu.MPI.ex() [0xf1daa6]
amrex::BLBackTrace::handler(int)
/share/home/zhengjian/Pele/amrex/Src/Base/AMReX_BLBackTrace.cpp:99:15

2: ./PeleLMeX2d.gnu.MPI.ex() [0x5cc38e]
amrex::Abort(char const*) inlined at /share/home/zhengjian/Pele/PeleLMeX/Source/PeleLMDiffusion.cpp:1148:18 in PeleLM::differentialDiffusionUpdate(std::unique_ptr<PeleLM::AdvanceAdvData, std::default_delete<PeleLM::AdvanceAdvData> >&, std::unique_ptr<PeleLM::AdvanceDiffData, std::default_delete<PeleLM::AdvanceDiffData> >&)
/share/home/zhengjian/Pele/amrex/Src/Base/AMReX.H:159:19
PeleLM::differentialDiffusionUpdate(std::unique_ptr<PeleLM::AdvanceAdvData, std::default_delete<PeleLM::AdvanceAdvData> >&, std::unique_ptr<PeleLM::AdvanceDiffData, std::default_delete<PeleLM::AdvanceDiffData> >&)
/share/home/zhengjian/Pele/PeleLMeX/Source/PeleLMDiffusion.cpp:1148:18

3: ./PeleLMeX2d.gnu.MPI.ex() [0x57be7c]
PeleLM::oneSDC(int, std::unique_ptr<PeleLM::AdvanceAdvData, std::default_delete<PeleLM::AdvanceAdvData> >&, std::unique_ptr<PeleLM::AdvanceDiffData, std::default_delete<PeleLM::AdvanceDiffData> >&)
/share/home/zhengjian/Pele/PeleLMeX/Source/PeleLMAdvance.cpp:362:4

4: ./PeleLMeX2d.gnu.MPI.ex() [0x57ed07]
PeleLM::Advance(int)
/share/home/zhengjian/Pele/PeleLMeX/Source/PeleLMAdvance.cpp:172:7

5: ./PeleLMeX2d.gnu.MPI.ex() [0x5939aa]
PeleLM::Evolve()
/share/home/zhengjian/Pele/PeleLMeX/Source/PeleLMEvolve.cpp:43:18

6: ./PeleLMeX2d.gnu.MPI.ex() [0x429a3f]
main
/share/home/zhengjian/Pele/PeleLMeX/Source/main.cpp:57:51

7: /lib64/libc.so.6(__libc_start_main+0xf5) [0x2b630057f555]

8: ./PeleLMeX2d.gnu.MPI.ex() [0x4341ad]
_start at ??:?

Multiple processors issue

Hello everyone,

I'm new to using PeleLMeX and I'm trying to simulate the FlameSheet case using multiple processors. However, I've noticed that some of the files generated during the simulation have names like 'plt00000.old.0782200/', which is causing issues when I try to visualize the results using VisIt.

Does anyone have any advice or best practices for avoiding these kinds of issues when working with multiple processors? Any guidance you can offer would be greatly appreciated.

Thank you!

Unnecessary code after mac FillPatchTwoLevels?

In chasing down something in amr-wind (a non-subcycling code), I was looking at all the stuff happening in create_constrained_umac_grown. Specifically this line and below: https://github.com/AMReX-Combustion/PeleLMeX/blob/development/Source/PeleLMeX_UMac.cpp#L270. This stuff is present in IAMR (subcycling) as well (in some similar form): https://github.com/AMReX-Fluids/IAMR/blame/development/Source/NavierStokesBase.cpp#L1159. It is not present in incflo (non subcycling).

In chatting with @cgilet, it looks like this stuff is needed for a subcycling code but not a non-subcycling code. Therefore this stuff is probably not needed for PeleLMeX? Is this a relic of transforming from LM (subcycling) to LMeX (non-subcycling)? Do @nickwimer, @drummerdoc, @esclapez have thoughts on this?

Compiling SUNDIALS Library on New Cluster without GitHub Access

Hello,

I am reaching out to you because I have just switched to another cluster to run my PeleLMeX calculations. I have gathered all the files and made sure I have everything needed for compilation. To do this, I started by running 'make TPLrealclean' and then 'make TPL.'

After executing 'make TPL', the build attempts to fetch the SUNDIALS library from GitHub. The issue I am encountering on the new cluster is that GitHub is not accessible! As a result, 'make TPL' fails, and I receive the following message:

$ make TPL
Loading /ccc/work/cont003/gen7456/bazharza/PeleLMeX/Submodules/amrex/Tools/GNUMake/comps/gnu.mak...
Loading /ccc/work/cont003/gen7456/bazharza/PeleLMeX/Submodules/amrex/Tools/GNUMake/sites/Make.unknown...
Loading /ccc/work/cont003/gen7456/bazharza/PeleLMeX/Submodules/amrex/Tools/GNUMake/packages/Make.sundials...
GNUmakefile:265: warning: overriding recipe for target '/'
GNUmakefile:240: warning: ignoring old recipe for target '/'
==> Building SUNDIALS library
make[1]: Entering directory '/ccc/work/cont003/gen7456/bazharza/PeleLMeX/Submodules/PelePhysics/ThirdParty'
Loading /ccc/work/cont003/gen7456/bazharza/PeleLMeX/Submodules/amrex/Tools/GNUMake/comps/gnu.mak...
Loading /ccc/work/cont003/gen7456/bazharza/PeleLMeX/Submodules/amrex/Tools/GNUMake/sites/Make.unknown...
GNUmakefile:265: warning: overriding recipe for target '/'
GNUmakefile:240: warning: ignoring old recipe for target '/'
cp: cannot stat '/cmb138/software/v6.5.0.tar.gz': No such file or directory
--2024-01-17 17:22:36-- https://github.com/LLNL/sundials/archive/refs/tags/v6.5.0.tar.gz
Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... failed: Connection timed out.
Retrying.

Could you please advise on how to overcome this compilation step and create the SUNDIALS library without using GitHub?

Thank you in advance.

Compilation in a cluster using a new mechanism model

I am facing a compilation issue within a cluster, specifically related to the creation of the executable file.

I created a new mechanism within the 'Mechanism' section of PelePhysics. This mechanism works well on my local desktop for simplified cases, allowing me to obtain the .exe file and run computations successfully.

However, when I attempt to compile it on the cluster, I am unable to generate the .exe file, and I receive the following error message: /PeleLMeX-1/Submodules/PelePhysics/ThirdParty/INSTALL/gnu/lib/libsundials_cvode.so: undefined reference to `pow@GLIBC_2.29'
collect2: error: ld returned 1 exit status
make: *** [/PeleLMeX-1/Submodules/amrex/Tools/GNUMake/Make.rules:56: PeleLMeX2d.gnu.ex] Error 1

Interestingly, when I switch to a different mechanism in the GNUmakefile, the compilation process continues until it reaches the linking stage and generates the .exe file.

Do you have any suggestions on how to avoid such issues using the new mechanism, particularly when working with a cluster?

Reading 2D temperature field

Can PeleLMeX be configured to incorporate a 2D temperature field from a .dat file for the FlameSheet case?

AMR criteria using HeatRelease

Hi all,

I am looking to use HeatRelease as a quantity to refine the grid on, and looking through the tutorials, it looks like the swirl case does exactly this. However, when I try to do this, I get an error

amrex::Error::42::PeleLM::taggingSetup(): unknown variable field for criteria HR

Is HeatRelease a supported tagging field? If not, is this something you want to include in the future?

Thanks!

Hanging during initialisation with EB

This bug is very specific, but I am wondering if anyone has any thoughts on it or has seen anything similar.

When I run on some systems (this error is not consistent across different HPC systems) with EB, the code hangs on initialisation while resetting the covered masks when I run with no AMR. I can trick it out of this by setting amr.max_level=1 and peleLM.refine_EB_max_level = 0. I don't think this would affect code performance too much, since it presumably would just have an empty second entry in the vectors of MultiFabs, but it seems like a strange thing to happen. It also seems very strange that it does not happen consistently across different systems.

EB_FlamePastCylinder

@esclapez I set up a case based on EB_FlamePastCylinder in PeleLMeX (because I want to use the isothermal EB boundary). The calculation is very fast with flow only, but when the reaction is added it becomes very slow. This is my input file:
input.txt

PremBunsen3D crashes

The defaults in these cases should be changed so they work out of the box.

Discussed in #309

Originally posted by sjlienge November 15, 2023
Hello,

The PremBunsen3D case crashes with SIGABRT after printing the build infos. The same also applies to PremBunsen2D. Is it known what the issue is? The source files seem alright to me. I would like to use the case to understand the setup of burner flames and to set up a calculation of the Cambridge burner.

Thank you in advance already!

pelelm.incompressible flag segfaults

Hi,
If I run with the incompressible flag switched on, the code segfaults. I tracked it down to the line in the Utils that tries to get the reactions MF without a flag check (e.g. m_do_react or m_incompressible), referenced below along with a sketch of a possible guard. I could fix it and put in a pull request, but figured someone could add this quick fix without needing a full pull request.

std::unique_ptr<MultiFab> reactmf = fillPatchReact(lev, a_time, nGrow);
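For illustration only, a hedged sketch of the kind of guard this might involve (member and function names taken from the line above; this is not the actual patch):

// Hypothetical guard: skip filling the reaction MultiFab when reactions are
// disabled or the incompressible mode is active, as suggested above.
std::unique_ptr<MultiFab> reactmf;
if (m_do_react && !m_incompressible) {
  reactmf = fillPatchReact(lev, a_time, nGrow);
}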

Thomas

Wall boundary conditions (e.g. NoSlipWallIsothermal) lead to wrong behaviour when using the Soret effect.

When the Soret effect is turned on (simple, fuego), we can set up a test case where species are "produced" at the wall due to the zero-gradient boundary condition. With Soret, a zero-gradient boundary condition for the species is actually not correct, since the fluxes due to thermal diffusion have to be balanced by fluxes in the species profiles, leading to non-zero-gradient species profiles at the walls. I created a test case (a pipe with two walls on the top and the bottom and periodic BCs on the left and right), set a temperature gradient from 300K at the bottom of the domain to 700K at the top (top wall temperature = 700K, bottom wall temperature = 300K), and initialized the species profiles with a mixture of air and H2 (distributed equally over the domain). Over time the mass fraction of H2 increases and the mass fluxes of O2 and N2 decrease, which is physically not correct.
If you have any questions, I can also share the test case and we can discuss it!

Thanks

AMReX MLMG bottom solve change causes PeleLMeX EB_PipeFlow test to fail

After AMReX-Codes/amrex#3991 (stemming from Exawind/amr-wind#886) the PeleLMeX EB_PipeFlow test is failing on the nodal projection on the first timestep: https://github.com/baperry2/PeleLMeX/actions/runs/9784946279/job/27016932023.

Despite the CI failing on this test, I've tried with a few different machines and compilers and haven't been able to reproduce the failure locally. However, turning up verbosity for the nodal projection when running locally, I see that since the change the case takes more iterations for the nodal projection to converge (10 vs 8) and there are more frequent bottom solve failures.

@marchdf @WeiqunZhang @asalmgren @jrood-nrel it looks like you all worked the issue in amr-wind. Any idea what might be happening here in LMeX?

OMP problem?

Hi everyone,

I'm trying to run the FlameSheet problem, which is a RegTest within PeleLMeX. When I enable the OMP option in the GNUmakefile, the problem does not converge and hangs with errors. I don't have any problems when using MPI only. Is there a problem with the OMP option?

The compilation options are as follows

Compilation

COMP = gnu
USE_MPI = TRUE
USE_OMP = TRUE
USE_CUDA = FALSE
USE_HIP = FALSE

Hydrostatic condition for open domain

Hi all,

I am looking to run a buoyancy-driven flow that is open to atmospheric conditions. However, when running a simulation with boundary conditions specified as "Outflow" normal to the direction of gravity, the hydrostatic condition induced by the gravitational field disappears.

To show this most clearly, starting from the HotBubble tutorial case, the following conditions

geometry.is_periodic = 1 0               # For each dir, 0: non-perio, 1: periodic
peleLM.lo_bc = Interior NoSlipWallAdiab
peleLM.hi_bc = Interior Outflow

produced an avg_pressure field (using yt)

as expected for the first snapshot with a hydrostatic condition.

Using

geometry.is_periodic = 0 0               # For each dir, 0: non-perio, 1: periodic
peleLM.lo_bc = NoSlipWallAdiab NoSlipWallAdiab
peleLM.hi_bc = NoSlipWallAdiab Outflow

produced

Again, this is expected.

And finally, using

geometry.is_periodic = 0 0               # For each dir, 0: non-perio, 1: periodic
peleLM.lo_bc = Outflow NoSlipWallAdiab
peleLM.hi_bc = Outflow Outflow

produced

This last image I would have expected to be quite similar to the other two. Is there a physical justification for this behavior, or is it a bug? If the former, is there a way to introduce the hydrostatic condition?

Thank you very much!!

Developments shopping list

  • Refactor problem setup
  • Enable inflow on EBs
  • Add on-the-fly diagnostics: PDF, conditional mean, iso-surface, ...
  • Extend CI
  • Extend V&V
  • Add more tutorials
  • Machinery to control dt when adding levels
  • Add option to prune covered boxes when EB
  • Time averaged data

Adaptive mesh refinement in specific region of computational domain

Hi all,

Is there any interest in combining the static and adaptive mesh refinement approaches? I am finding many instances where I would like to adaptively refine only a specific region of the flow using a certain criterion. It would look something like:

amr.gradrho.max_level = 2
amr.gradrho.adjacent_difference_greater = 0.01
amr.gradrho.field_name = density
amr.gradrho.in_box_lo = -1.0 -1.0 0.0
amr.gradrho.in_box_hi =  1.0  1.0 2.5

This would only tag density differences within the specified in_box region.

Thanks!
Mike

Non-unity Lewis number

Hello,
Regarding the Lewis number: I would like to impose a constant value, for example Le = 0.8.
I tried to consult the documentation, but I couldn't find an explanation on this matter.
Is it possible to set a non-unity Lewis number?

Rough walls

In PeleLM/PeleLMeX, can the walls in the attached figure be modeled and meshed? Or can this wall be embedded?
This wall is built from the expression f = cos(2*pi*x) * cos(2*pi*z), with pi = 3.14.

Failure of tutorial PremBunsen2D

Hi all,

It seems like the PremBunsen2D tutorial case fails to run with the given inputs. On the first time step, a massive streamwise velocity is estimated. Attached is part of the output. Any idea what is going on here? Thanks!

Mike

SUNDIALS initialized.
MPI initialized with 112 MPI processes
MPI initialized with thread support level 0
AMReX (22.05-27-g5d88558d2aef) initialized
Successfully read inputs file ... 

 ================= Build infos =================
 PeleLMeX    git hash: v21.10-118-g23397a2-dirty
 AMReX       git hash: 22.05-27-g5d88558d2
 PelePhysics git hash: v0.1-1007-gb0d3f35b
 AMReX-Hydro git hash: 343e20e
 ===============================================

==============================================================================
 State components 
==============================================================================
 Velocity X: 0, Velocity Y: 1 
 Density: 2
 First species: 3
 Enthalpy: 24
 Temperature: 25
 thermo. pressure: 26
 => Total number of state variables: 27
==============================================================================

 Initializing data for 2 levels 
 Initialization of Transport ... 
 Initialization of chemical reactor ... 
Creating ReactorBase instance: ReactorCvode
Initializing CVODE:
  Using atomic reductions
 Mixture fraction definition lacks fuelID: consider using peleLM.fuel_name keyword 
25 variables found in PMF file
580 data lines found in PMF file
 Making new level 0 from scratch
 Making new level 1 from scratch
 Resetting fine-covered cells mask 
==============================================================================
Typical values: 
	Velocity: 0.1 3.923852814e+179 
	Density: 0.6650455194
	Temp:    1260.890134
	H:       -218228.8977
	Y_H2: 0.0003458573634
	Y_H: 3.14226256e-05
	Y_O: 0.000241012551
	Y_O2: 0.2074301803
	Y_OH: 0.0006121461777
	Y_H2O: 0.01451381058
	Y_HO2: 5.371213803e-05
	Y_CH2: 6.546128626e-06
	Y_CH2(S): 4.950200534e-07
	Y_CH3: 0.0002058628013
	Y_CH4: 0.04688699785
	Y_CO: 0.006938410704
	Y_CO2: 0.01541563049
	Y_HCO: 9.539561314e-06
	Y_CH2O: 0.0002204527497
	Y_CH3O: 1.360707325e-05
	Y_C2H4: 0.0002558423542
	Y_C2H5: 9.233090566e-06
	Y_C2H6: 0.0003360756718
	Y_N2: 0.762207865
	Y_AR: -2.030418186e-127
==============================================================================
 Est. time step - Conv: 1.556568412e-184, divu: 1e+20
 Initial dt: 1.556568412e-187
 Initial velocity projection:   U: 0.1  V: 3.923852814e+179
 >> After initial velocity projection:   U: 0  V: 0
 Est. time step - Conv: 3e+199, divu: 1.831467954e-05
 Initial dt: 1.831467954e-08
 Initial velocity projection:   U: 0  V: 0
 >> After initial velocity projection:   U: 0  V: 0
 Doing initial pressure iteration(s) 

 ================   INITIAL ITERATION [0]   ================ 
 Est. time step - Conv: 3e+199, divu: 7.105853017e-173
 STEP [-1] - Time: 0, dt 7.105853017e-176
   SDC iter [1] 
   - oneSDC()::MACProjection()   --> Time: 0.009342319332
   - oneSDC()::ScalarAdvection() --> Time: 0.002244584262
   - oneSDC()::ScalarDiffusion() --> Time: 0.004823582247
From CVODE: At t = 0 and h = 6.01889e-195, the corrector convergence test failed repeatedly or with |h| = hmin.
All other processors fail as well.

peleLM.isothermal_EB = 1 causes CUDA error 700 - illegal memory access was encountered !!!

Hello, I am able to run a low Mach case with a fairly complex geometry with EB on CPUs. However, the same setup doesn't run on GPUs. It complains of illegal memory access. My case has isothermal walls.
After ruling out all other possibilities, I have narrowed down the issue: switching the flag to peleLM.isothermal_EB = 0 runs seamlessly, but setting it to 1 results in the above-mentioned error.

The Backtrace file points to PeleLMUtils.cpp (line 333, in the function intFluxDivergenceLevelEB), where this issue may have arisen first.

Is there a fix for this? Looking forward to some help on this.

Thanks!

Taaresh

TODO list

  • typical values (LE)
  • multi-component linear solves (LE)
  • MFbased interpolator (LE)
  • Fix level data layout (LE)
  • New MFI iterators
  • extended CI
  • convergence testing: advection, diffusion, ADR (LE)
  • regression tests
  • Hypre interface to linear solvers (LE)
  • turbulence inflow (LE)
  • hook to dump_and_stop available in AmrLevel
  • use TPROF on GPU to identify any issues, have scaling data (LE)
  • mass balance in closed chamber dp0dt (NW)
  • mixture fraction
  • control max level EB refinement (LE)
  • control dt when adding level
  • cylindrical coordinates support
  • restore incompressible option
  • Add option to prune covered boxes when EB
  • Time averaged data

Error on building: `fatal error: sundials/sundials_context.h: No such file or directory`

A recursive git clone was used to perform the installation. In the folder Exec/RegTests/EB_FlowPastCylinder, make gives:

Loading ../../../Submodules/amrex/Tools/GNUMake/comps/gnu.mak...
Loading ../../../Submodules/amrex/Tools/GNUMake/sites/Make.unknown...
Loading ../../../Submodules/amrex/Tools/GNUMake/packages/Make.sundials...
GNUmakefile:265: warning: overriding recipe for target '/'
GNUmakefile:240: warning: ignoring old recipe for target '/'
Generating AMReX_Config.H ...
Generating AMReX_Version.H ...
Compiling AMReX_NVector_MultiFab.cpp ...
In file included from ../../../Submodules/amrex/Src/Extern/SUNDIALS/AMReX_NVector_MultiFab.H:19,
                 from ../../../Submodules/amrex/Src/Extern/SUNDIALS/AMReX_NVector_MultiFab.cpp:12:
../../../Submodules/amrex/Src/Extern/SUNDIALS/AMReX_Sundials_Core.H:6:10: fatal error: sundials/sundials_context.h: No such file or directory
    6 | #include <sundials/sundials_context.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [../../../Submodules/amrex/Tools/GNUMake/Make.rules:262: tmp_build_dir/o/2d.gnu.MPI.EXE/AMReX_NVector_MultiFab.o] Error 1

amrex::Abort::0::MLMGBndry::setBoxBC: Unknown LinOpBCType !!! SIGABRT

Hello. I am trying to replicate the "Non-reacting flow past a cylinder" tutorial provided by amrex-combustion under PeleLMeX. I did everything as specified in the tutorial; the only change I made was turning off MPI by setting USE_MPI = FALSE in the GNUmakefile.
When I try to run the case by giving the command "./PeleLMeX2d.gnu.ex input.2d-Re500", it gives this error -

Doing initial projection(s)

amrex::Abort::0::MLMGBndry::setBoxBC: Unknown LinOpBCType !!!
SIGABRT
See Backtrace.0 file for details

Can you please tell me where I am going wrong?

Thank you for your time.
