
adflow's Introduction

ADflow


ADflow is a flow solver developed by the MDO Lab at the University of Michigan. It solves the compressible Euler, laminar Navier–Stokes and Reynolds-averaged Navier–Stokes equations using structured multi-block and overset meshes. ADflow's features include the following:

  • Discrete adjoint implementation
  • "Complexified" code for complex-step derivative verification
  • Massively parallel (both CPU and memory scalable) implementation using MPI

ADflow has been used in aerodynamic, aerostructural, and aeropropulsive design optimization of aircraft configurations. We have also used ADflow to perform design optimization of hydrofoils and wind turbines.

Documentation

Please see the documentation for installation details and API documentation.

To build the documentation locally, enter the doc folder and run make html in a terminal. You can then view the built documentation in the _build folder.

Citing ADflow

If you use ADflow, please see this page for citation information.

License

Copyright 2019 MDO Lab

Distributed using the GNU Lesser General Public License (LGPL), version 2.1; see the LICENSE file for details.

adflow's People

Contributors

a-cgray, akleb, anilyil, arshsaja, asitav, awccopp, bbrelje, bernardopacini, camader, daburdette, davidanderegg, denera, economon, eirikurj, ewu63, eytanadler, gawng, gkenway, jrram, justinsgray, lambe, lamkina, lvzhoujie, marcomangano, nbons, neysecco, shamsheersc19, sichenghe, sseraj, yqliaohk


adflow's Issues

Using blockettes with non-default simulations should raise an error and quit.

Description of feature

Blockettes only work for steady-state RANS simulations with the SA turbulence model on non-rotating reference frames. Any other combination of options should automatically disable this feature or raise an error so that these simulations run correctly.

Potential solution

We can either raise an error or automatically disable blockettes when any feature that does not have a blockette implementation is used. This can be done at either the pyADflow or the Fortran level.

I suggest automatically modifying the option at the pyADflow level if the user requested one of these features but did not specify the blockette option in the runscript. This way, the default will be adjusted automatically and no further action is required. If the user explicitly specified blockettes, then we should raise an error and quit.

I will also add a section on blockettes to the documentation so that the developers are aware of this coding approach.
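A minimal sketch of the proposed pyADflow-level logic. Note that the option names, the incompatibility table, and the idea of tracking which options the user explicitly set are all illustrative assumptions, not ADflow's actual internals:

```python
# Sketch only: option names, the incompatibility table, and the
# "userSpecified" set of explicitly set options are illustrative.
BLOCKETTE_INCOMPATIBLE = {
    "equationMode": lambda v: v.lower() != "steady",
    "turbulenceModel": lambda v: v.lower() != "sa",
    "useRotationSA": lambda v: v is True,
}

def resolveBlockettes(options, userSpecified):
    """Disable blockettes, or raise, when an incompatible option is requested."""
    incompatible = [
        name for name, isBad in BLOCKETTE_INCOMPATIBLE.items()
        if name in options and isBad(options[name])
    ]
    if not incompatible:
        return options
    if "useBlockettes" in userSpecified and options.get("useBlockettes"):
        # The user explicitly asked for blockettes: fail loudly and quit.
        raise ValueError(f"useBlockettes is incompatible with: {incompatible}")
    # Otherwise, quietly adjust the default.
    options["useBlockettes"] = False
    return options
```

With this approach, a user who never mentions blockettes gets the default silently corrected, while a user who explicitly requested them gets a hard error.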

Computer freezes when running simple script in parallel

Type of issue

What types of issue is it?

  • Bug

Description

This was discovered when trying to run test_18 in the new testing framework on multiple cores. For some reason it freezes my computer (a Shuttle box) when running with multiple cores, but runs fine on a single core.

Steps to reproduce issue

  1. Save the runscript below as freeze.py
  2. Copy conic_conv_nozzle.cgns from the ADflow tests input directory to the directory of the runscript, or modify the gridfile option to point to that file
  3. Run mpirun -np 2 python freeze.py
# MACH
from baseclasses import AeroProblem
from adflow import ADFLOW

ap = AeroProblem(name='conic_conv_nozzle', alpha=90.0, mach=0.5, altitude=0.0,
                 areaRef=1.0, chordRef=1.0, R=287.87)

aero_options = {
    'gridfile': 'conic_conv_nozzle.cgns',
    'outputdirectory': './',

    # Physics parameters
    'equationType': 'euler',
    'smoother': 'dadi',
    'nsubiter': 3,
    'CFL': 4.0,
    'MGCycle': 'sg',
    'MGStartLevel': -1,
    'nCycles': 1000,
    'monitorvariables': ['cpu', 'resrho', 'cl', 'cd'],
    'volumevariables': ['blank'],
    # 'surfacevariables': ['mach', 'cp', 'vx', 'vy', 'vz', 'blank'],

    # NK solver parameters
    'useNKSolver': True,
    'nkswitchtol': 0.01,
    'nkjacobianlag': 5,
    'nkouterpreconits': 3,
    'nkinnerpreconits': 2,
    'nkcfl0': 1e10,

    # Convergence parameters
    'L2Convergence': 1e-10,
}

CFDSolver = ADFLOW(options=aero_options, debug=False)
CFDSolver(ap)

Current behavior

Typically the computer freezes before the iterations complete. With some option combinations, this sometimes only happens on the second run.

Expected behavior

Program exits without locking up my computer.

Code version (if relevant)

Python version: 3.7.7
External dependencies: PETSc 3.11.0, OpenMPI 3.1.4
Internal packages: ADflow 2.2.0 (master)

A few enhancements for existing tests

Description of feature

A few changes that should be adopted:

  • Ensure that checkSolutionFailure and checkAdjointSolutionFailure are run, and also assert True for training
  • Store metadata using the new API in baseclasses. At a minimum, we should store all ADflow options, and maybe the IDWarp options if IDWarp is used. Unless the values need to be retrained, we should only update the JSON files by appending a new key at the end.

Out of date testing instructions in the docs.

Description

As the title says, the testing instructions are out of date. We moved to testflo, but the online docs still suggest the run_reg_tests.py approach.
We should probably release a new tag for ADflow itself?

Steps to reproduce issue

See the docs page

Current behavior

Still illustrates the old reg_tests approach

Expected behavior

Update with the info from the Readme

Code versions

The latest version (Untagged changes after v2.2.1)

Format the code

Description

Once @marcomangano finishes the PR on rotating frames, the python code needs to be formatted and all linting errors fixed.

Potential compilation issue with CGNS 3.4

Just taking a note in case someone runs into the same issue:

Recently I was compiling ADflow with the following dependencies under Ubuntu 18.04:

  1. HDF5 1.10.3
  2. CGNS 3.4 (shared, 64bit, fortran/hdf5/parallel-enabled)
  3. PETSc 3.11 (takes care of HDF5)
  4. Python 2.7/3.6

The Python import test fails with undefined symbol: __cgns_MOD_bctypename in Python 2.7, and with a segfault in Python 3.6.

It turns out that the Fortran module from CGNS is not compiled correctly, and one solution is available here. Near line 551 of ./src/CMakeLists.txt, one needs to change

add_library(cgns_shared SHARED ${cgns_FILES})

to

if (CGNS_ENABLE_FORTRAN)
    add_library(cgns_shared SHARED ${cgns_FILES} cgns_f.F90)
else (CGNS_ENABLE_FORTRAN)
    add_library(cgns_shared SHARED ${cgns_FILES})
endif (CGNS_ENABLE_FORTRAN)

I am not sure whether this bug has been fixed in the more recent CGNS 4.x versions.

Migrate the MPhys wrapper here

Type of issue

  • New feature (non-breaking change which adds functionality)

Description

The current om_adflow wrapper for OpenMDAO is slowly becoming deprecated as we figure out more details about the MPhys wrapper, which achieves similar functionality in a more general way. We will remove the current om_adflow wrapper for good and move the MPhys wrapper from the MPhys repository here once the development is mature enough.

Generate outputs suitable for matplotlib

Currently, the slice and lift files are designed to be read by Tecplot only. If someone wants to plot with Matplotlib instead, they have to parse those files themselves.

Instead, we should add an option directly in ADflow to generate Python- and NumPy-compatible output files.

Menter SST turbulence model outputs NaNs

Description

Working on wind turbine blades with @marcomangano, I tried to use turbulence models other than the Spalart-Allmaras model, which I had been using until now. I know that only SA is differentiated, but I just want to understand the sensitivity of the results to the turbulence model from an analysis perspective. So I went for Menter's k-ω SST model (see my options below), and I got NaNs and a PETSc error as of iteration 1 (see the log).

I also tried k omega wilcox and k omega modified, using "turbresscale": [1.0e3, 1.0e-6]. There are no NaNs in that case; however, I found it suspicious that the two turbulence residuals displayed for monitoring are exactly equal to 0.0. It looks like the same happens at iteration 0 with Menter's model.

Less important: I noted a typo in the docs: the option to specify the rescaling parameters for turbulence models is turbresscale, not turbresscalar.

Looking forward to your insights on what could be causing this.

Options

{'adjointl2convergence': 1e-09,
 'adjointmaxiter': 500000,
 'adjointsubspacesize': 500,
 'adpc': True,
 'anklinresmax': 0.1,
 'ankmaxiter': 60,
 'anksecondordswitchtol': 0.01,
 'cfl': 1.5,
 'cflcoarse': 1.5,
 'equationtype': 'RANS',
 'gridfile': 'UAE_1blade_short_3pitch_coarse_5_L1_3D_128.cgns',
 'l2convergence': 1e-11,
 'l2convergencecoarse': 1e-05,
 'lowspeedpreconditioner': True,
 'mgcycle': 'sg',
 'monitorvariables': ['cpu', 'resrho', 'cd', 'resturb', 'yplus'],
 'ncycles': 30000,
 'ncyclescoarse': 250,
 'nkinnerpreconits': 2,
 'nkjacobianlag': 3,
 'nkouterpreconits': 3,
 'nksubspacesize': 100,
 'nkswitchtol': 1e-14,
 'nsubiterturb': 10,
 'restrictionrelaxation': 1.0,
 'smoother': 'dadi',
 'surfacevariables': ['cp', 'mach', 'yplus', 'sepsensor', 'p', 'temp'],
 'turbresscale': [1000.0, 1e-06],
 'turbulencemodel': 'menter sst',
 'useanksolver': True,
 'useblockettes': False,
 'usenksolver': True,
 'useqcr': True,
 'userotationsa': True,
 'volumevariables': ['resrho', 'resturb', 'vort', 'mach']}

Log

#
# Grid 1: Performing 30000 iterations, unless converged earlier. Minimum required iteration before NK switch:      5. Switch to NK at totalR of:   0.43E-07
#
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
#  Grid  | Iter | Iter |  Iter  |   CFL   | Step | Lin  |    Wall    |        Res rho         |       Res kturb        |       Res wturb        |        C_drag          |        totalRes        |         Y+_max         |
#  level |      | Tot  |  Type  |         |      | Res  | Clock (s)  |                        |                        |                        |                        |                        |                        |
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      1       0      0     None   0.00E+00  1.00   ----  0.23527E+00   0.4008312643096706E+03   0.0000000000000000E+00   0.0000000000000000E+00   0.6116832261115712E-01   0.4296967046625193E+07   0.4770154773966238E+01
 Bad Block:  -58.457080994455325                            NaN                       NaN                       NaN                       NaN  -136.43358162455107                            NaN                       NaN                       NaN                       NaN  -27.057667980828771                            NaN                       NaN                       NaN                       NaN  -21.493860824869575                            NaN                       NaN                       NaN                       NaN   0.0000000000000000                            NaN                       NaN                       NaN                       NaN
 irow:     1963028
 icol     1802688
 nn:           1
 ijk:           6           6           2
 ---------------------------------------------------------------------------
PETSc or MPI Error. Error Code  1. Detected on Proc 51
Error at line:   630 in file: ../adjoint/adjointUtils.F90
 ---------------------------------------------------------------------------
 Bad Block:  -27.121688365402417                            NaN                       NaN                       NaN                       NaN -0.32959771301234786                            NaN                       NaN                       NaN                       NaN   12.760238156574124                            NaN                       NaN                       NaN                       NaN   3.1770896941151587                            NaN                       NaN                       NaN                       NaN   0.0000000000000000                            NaN                       NaN                       NaN                       NaN
 irow:     1320948
 icol      849908
 nn:           1
 ijk:           6           6           1

...

Code versions

  • ADflow v2.2.1
  • Python 3.7.2

python adflow/pyWeightAndBalance.py missing mdo_import_helper

Hello,

I'm an aeronautics student from France with a bit of experience in Python, doing a group project on glider design.

While testing and trying to run your adflow/pyWeightAndBalance.py,
I keep failing because the extension module mdo_import_helper is missing.

I'm getting the error message:
ModuleNotFoundError: No module named 'mdo_import_helper'

I cannot find anything named mdo_import_helper that comes with Python.

Is mdo_import_helper some sort of dependency of the code I'm trying to run?

Hopefully some instructions could tell me exactly what else I need to
install to be able to use it.

Thank you very much,

Mirabelle

Test 18 fails with CGNS 4.1.2

Type of issue

What types of issue is it?
Select the appropriate type(s) that describe this issue

  • Bugfix (non-breaking change which fixes an issue)

Description

I'm putting this here just for documentation. The whole MACH stack works fine with CGNS 4.1.2; this is the only failing test (test 18):

#--------------------------- !!! Error !!! ----------------------------
#* Terminate called by processor 0
#* Run-time error in procedure readRestartVariable
#* Error message: Something wrong when calling cg_field_read_f
#*
#* Now exiting
#----------------------------------------------------------------------

I didn't see any relevant entries in the changelog for cg_field_read_f, so I am not really sure what happened.

Migration to Testflo for regression testing

Type of issue

  • Other: Code modifications required to change the underlying testing suite to testflo.

Description

This issue will contain the discussions regarding the migration of ADflow testing to testflo. The modifications in PR #26 to migrate the testing suite to testflo are missing a number of important features. The development related to this will be pulled into the testflo branch of ADflow, and the changes will be merged into the master branch once all expected functionality is provided.

Current behavior

Here is a list of issues (in no particular order) that I think should be addressed. This is not an exhaustive list and we will have the discussions related to these items in this issue page.

  1. Individual tests do not provide feedback as they finish, which makes it difficult to determine the current progress.
  2. Tests don't run in order, e.g. test 18 runs before test 1, etc.
  3. No complex tests.
  4. No output from tests. The output from the test runs should go into a file for each test for easier debugging.
  5. No documentation on obtaining the input files required for tests. Existing documentation for input files is in a readme file in a directory that is a few levels down from the main directory. There are no external links (that I could see) to this readme. The directory name was changed from inputFiles to input_files.
  6. It's not clear how many processors are used for each test. This may be related to testflo, but either way, it should be clearly documented.
  7. If multiple tests run at the same time, it should be guaranteed that tests do not try to over-write to same output files.
  8. The tests generate a lot of intermediate files that are not cleaned up and appear in the git status.
  9. No documentation in general for simple testflo options.
  10. If a value does not match in the reg tests, the output should also state what the value means (dot product test of this routine etc.).
  11. Old test file and folder structure are still there.

Expected behavior

Here is a list of expected behavior for all the issues I raised above (with the same numbering):

  1. Individual tests should provide feedback as they finish, regardless of them failing or succeeding.
  2. Tests should run in order, e.g. test 1, then 2, then 3, up to 18. Although this is not strictly required, it will become tricky to figure out when a particular test ran if this order is not followed. If tests run in order, I know that test 3 did not run yet since test 2 is still running. If this order is not preserved, I would need to go through the full output to see if that particular test ran already. This will become a larger issue as we add more tests.
  3. The suite should also contain complex tests.
  4. Each test should write its output to an output file for easier debugging. These files can be put in a folder, which should be included in the gitignore list.
  5. Documentation needs to be greatly improved. The readme file in the root adflow directory should point to the testing documentation directly. This documentation should include directions on how to obtain the input files. The documentation should also include various testflo options that can be used to run the tests, and possibly contain a link to the testflo documentation itself.
  6. It should be very clear how the tests are run and with how many processors. "Each test runs on 4 processors" is not enough information: do they run in serial or in parallel, and how does testflo decide how many tests to run concurrently? The documentation should include the options to specify exactly how many processors per test and how many tests run at a given time.
  7. The tests should not include any i/o race, and use different output files. Same input files are ok.
  8. The intermediate files created as the tests run should be deleted by default. There should be an option to prevent this cleanup to keep the files in place. If this is not possible, the files should be moved to a folder (by changing the output directory in tests). In both of these scenarios, any intermediate file that gets created should be included in the gitignore list.
  9. The documentation should include a few useful testflo options.
  10. When a value does not meet the tolerance, the description of the value should be printed for easier debugging.
  11. All of the old test files and any references to them should be removed.
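For item 9, the documentation could start from a short list like the following (the flag names are from memory of testflo's CLI and should be verified against testflo --help before being documented):

```shell
# Run the full suite from the tests directory; flag names to be verified.
testflo .                     # discover and run all tests
testflo -v .                  # verbose: show the output of each test
testflo -n 1 .                # run one test at a time (no concurrency)
testflo --pre_announce .      # print each test name before it runs
testflo -m "test_adjoint*" .  # run only tests matching a glob
```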

Missing `parameterized` module from dependencies

Description

With the overhaul of the testing infrastructure, it seems we forgot to include parameterized as a required package.
If you don't have it already installed, some of the tests will fail due to the import error (only some tests are affected, see below).
It's an easy fix; I am documenting it for the sake of completeness.

Steps to reproduce issue

If you don't have parameterized available, just run testflo regularly.

Current behavior

You will get this in the terminal output:

The following tests failed:
test_adjoint.py:../tests/reg_tests/test_adjoint.py
test_functionals.py:../tests/reg_tests/test_functionals.py
test_solve.py:../tests/reg_tests/test_solve.py

Looking at the error messages, they are all related to this import error.

Expected behavior

All the tests should succeed.
If you manually pip install parameterized, the bug is fixed.

I recommend including parameterized in install_requires to avoid any future issues.
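The change would look roughly like this in setup.py (all arguments other than the parameterized entry are illustrative, not ADflow's actual metadata):

```python
# Sketch of the setup.py fix; names besides "parameterized" are illustrative.
from setuptools import setup

setup(
    name="adflow",
    # ... other metadata ...
    install_requires=[
        "numpy",          # illustrative existing dependency
        "parameterized",  # the missing test dependency
    ],
)
```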

Code versions

  • Python 3.8
  • Testflo 1.4.2
  • ADflow 2.2.1

Update OM wrapper to use new API

Type of issue

What types of issue is it?
Select the appropriate type(s) that describe this issue

  • Bugfix (non-breaking change which fixes an issue)

Description

With the release of OpenMDAO v3, many deprecated APIs have finally been removed. This means that the om_adflow wrapper no longer works with OM3. This in and of itself is not an issue. However, those who have the new version of OpenMDAO installed but do not need the OM wrapper in ADflow cannot import and use the standard ADflow functionality. This is due to the way imports are done in the __init__ file.

Current behavior

With openMDAO v3 installed, pyADflow cannot be imported.

Expected behavior

pyADflow should be usable.

External dependencies:

  • openMDAO v3.x
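A common fix is to make the wrapper import optional. A sketch of the pattern (the module path adflow.om_adflow is assumed for illustration; the real layout in ADflow's __init__ may differ):

```python
import importlib

def optional_import(module_name):
    """Return (module, available) without raising when the import fails."""
    try:
        return importlib.import_module(module_name), True
    except ImportError:
        return None, False

# In adflow/__init__.py one could then write (names illustrative):
# om_adflow, HAS_OM_WRAPPER = optional_import("adflow.om_adflow")
```

This way, an incompatible or missing OpenMDAO only disables the wrapper instead of breaking every import of pyADflow.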

Which papers to cite

Type of issue

What types of issue is it?

  • Documentation update

Description

  • It's currently unclear which papers to cite. Should all 3 in the readme be cited? Is the choice left up to the user? ("Please cite ADflow in any publication for which you find it useful." in the readme)
  • Also need to add https://arc.aiaa.org/doi/10.2514/6.2017-0357 for those using overset meshes.

Document and update adflow options

Type of issue

  • Documentation update

Description

We need to put some effort into documenting all ADflow options.
The current list at https://mdolab-adflow.readthedocs-hosted.com/en/latest/options.html is outdated and missing many options.

Preferred solution

This documentation should preferably be generated automatically. All options exist in pyADflow.py, and the default values etc. should be set from what is found there. The issue is the detailed description of what each parameter does. Either we can include a detailed description in pyADflow.py, or we can write the descriptions elsewhere and combine them with an automated process.
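A sketch of the automated route: keep each description next to its default and generate the RST from that single source of truth. The option names and the (default, description) storage structure here are assumptions for illustration, not ADflow's actual layout:

```python
# Sketch: store (default, description) pairs and emit RST definition entries.
# The real ADflow defaults live in pyADflow.py; this structure is assumed.
OPTIONS = {
    "gridFile": ("default.cgns", "Name of the CGNS grid file."),
    "equationType": ("RANS", "Governing equations to solve."),
    "useBlockettes": (True, "Use the cache-optimized residual routines."),
}

def options_to_rst(options):
    """Render the options dict as a block of RST definition entries."""
    lines = []
    for name, (default, desc) in sorted(options.items()):
        lines.append(f"``{name}``")
        lines.append(f"    Default: ``{default!r}``")
        lines.append(f"    {desc}")
        lines.append("")
    return "\n".join(lines)
```

A small Sphinx extension (or a pre-build script writing options.rst) could call this at documentation build time, so the docs can never drift from the defaults again.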

Update ANK to work with complex step

Type of issue

  • New feature (non-breaking change which adds functionality)

Description

Currently, the ANK solver does not work with the complex-step (CS) method. Furthermore, the behavior of the NK solver with CS is not well documented. We should fix and document both solvers with complex step so that they can be used for CS verification studies.

Some problems about multipointSparse

Good day, John.
Sorry for disturbing you. I'm a PhD student from NWPU. I installed the MDO Lab codes you uploaded several days ago; they are really helpful for me. I followed the MACH tutorial to learn to use them. The example for ADflow analysis, illustrated in http://mdolab.engin.umich.edu/docs/packages/machtutorial/doc/aero_adflow.html, ran successfully on my own PC. I used 1 core and 4 cores to run aero_run.py, respectively, and everything was all right. However, when I ran aero_opt.py, some problems occurred. First, the multipoint package is not open source, so there was an error when the Python script imported the multipoint module.

xcz@ubuntu:~/Desktop/ADflow_test/opt_wing$ python aero_opt.py
Warning: OpenMDAO dependency is not installed. OM_ADFLOW wrapper will not be active
Traceback (most recent call last):
  File "aero_opt.py", line 14, in
    from multipoint import *
No module named multipoint

For all the aero_opt and aerostruct_opt cases in the tutorial, the scripts import the multipoint module. Hence, I commented out the lines with multipointSparse in aero_opt.py myself and modified the script without a reference script. Then I got an MPI error after ADflow finished in the process.

......
writetecplotsurfacesolution': True}
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
......

I have added my script in the attachment. Could you help me to modify my script? BTW, will the multipoint package be open source soon? Thanks a lot.

aero_opt.py.txt


How to change the coordinates in ADflow?

Good day.
Sorry for disturbing you. I have already installed ADflow and want to run the CRM wing case. In the baseline 3M block-structured CGNS grid that I downloaded from the MDO Lab website, the wing span is in the y direction. Is the default coordinate for the wing span in ADflow the z axis? If so, how should I set the parameters in aero_run.py to rotate the calculation axes?

I found some specifics in the tutorial, and I'm not sure if this is the axis definition:
[screenshot of the axis definition from the tutorial]

Thanks a lot.

Clean up obsolete forks

Type of issue

What types of issue is it?
Select the appropriate type(s) that describe this issue

  • Repo maintenance

This repository has many stale forks that have not been touched in years. Go through all the forks and delete those that are obsolete. If there's any useful code in the other forks, attempt to revive it; otherwise, they should be thrown out too.

Clear out compiler warnings and remarks

Type of issue

  • Code formatting

Description

We have a bunch of warnings and remarks generated during compilation of ADflow. PR #52 addresses most of these issues; however, we still have some warnings remaining:

  1. Complex builds still complain about missing modules. @nwu63 and I will work on this with the PR #52.
  2. The forces array is defined as an output variable with the getForces_b routine, but is not used. I did not modify this now, but we can edit the call signature and/or modify the intent. The warning looks like this:
../warping/getForces.F90(124): warning #6843: A dummy argument with an explicit INTENT(OUT) declaration is not given an explicit value.   [FORCES]
  3. With Intel compilers, we get a lot of remarks about output formats from Fortran files. These remarks look like this:
remark #8291: Recommended relationship between field width 'W' and the number of fractional digits 'D' in this edit descriptor is 'W>=D+7'.

Here is the explanation of this from Doctor Fortran himself: https://software.intel.com/en-us/forums/intel-fortran-compiler/topic/635137
This can be fixed easily, but we need to check that all outputs look okay after the fix.
4. There are a bunch of warnings generated by the f2py wrapping. These contain:

warning #1224: #warning directive: "Using deprecated NumPy API, disable it by "          "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION"

and a bunch of

libadflowmodule.c(11948): warning #556: a value of type "void (*)(int *, int *, void (*)(char *, int *), int *)" cannot be assigned to an entity of type "f2py_init_func"
    f2py_communication_def[i_f2py++].func = sendrequests;

Better handling of unsupported functionalities in complex build

Description of feature

ADflow should exit with a clear error message when running unsupported functionality in the complex build:

  • ANK/other solution strategies
  • adjoint solution
  • probably many other things...

Potential solution

Add raise Error() calls in pyADflow when running in complex mode. We will also have to identify exactly what is and isn't supported under the complex build.
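Such a guard could look something like this (the dtype == "D" complex-build check and the feature names are illustrative assumptions, not ADflow's actual attributes):

```python
# Sketch of a complex-build guard; the dtype convention and the set of
# unsupported features are assumptions for illustration.
UNSUPPORTED_IN_COMPLEX = {"solveAdjoint", "ANK solver"}

def check_complex_support(dtype, feature):
    """Raise a clear error when a feature is used in the complex build."""
    if dtype == "D" and feature in UNSUPPORTED_IN_COMPLEX:
        raise RuntimeError(
            f"{feature} is not supported in the complex build of ADflow."
        )
```

Calls to such a checker at the top of the relevant pyADflow methods would replace the current silent failures with actionable error messages.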

issues with Docker

I am exploring the use of ADflow, so I wanted to try out the Docker image before building it myself. This is not a bug, but a difficulty in using the Docker image. I have installed Ubuntu through WSL2 on a Windows 10 machine. I tried using the docker pull command in order to be able to run ADflow, but got this message:
docker pull mdolab/public
Using default tag: latest
Error response from daemon: manifest for mdolab/public:latest not found: manifest unknown: manifest unknown

I am not clear as to what I am doing wrong. Can anyone please help?
Narahari
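The likely cause is that the mdolab/public repository does not publish a latest tag, so docker pull needs an explicit tag. Something like the following should work, though the exact tag names change over time and should be checked on Docker Hub first:

```shell
# Pick a tag from https://hub.docker.com/r/mdolab/public/tags, then e.g.:
docker pull mdolab/public:u22-gcc-ompi-latest   # tag name illustrative
docker run -it mdolab/public:u22-gcc-ompi-latest /bin/bash
```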

Update default options list in tests

Description

The default options list specified here should be reviewed and updated

  • We had decided to use a minimal list of options that mirrors common usage, rather than specifying everything. That way, if the default options change, the tests will detect it. This is also closer to common usage (which likewise relies on default options).
  • This dictionary should use CaseInsensitiveDict so that each test that calls update() can be sure no duplicate keys with different capitalization get entered.
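A minimal sketch of the behavior we rely on from a case-insensitive dictionary (the real implementation is CaseInsensitiveDict in baseclasses; this toy version only covers the methods mentioned above):

```python
# Toy case-insensitive dict: keys are lowercased on the way in, so
# "MGCycle" and "mgcycle" can never coexist as duplicates.
class CaseInsensitiveDict(dict):
    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def __contains__(self, key):
        return super().__contains__(key.lower())

    def update(self, other):
        for k, v in other.items():
            self[k] = v
```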

Spectral Radii zeroed after getResiduals() with blockettes

Type of issue

Bug

Description

After you instantiate an ADflow object with a restart file, if you call

CFDSolver.getResiduals(ap)
CFDSolver.computeJacobianVectorProductBwd(....)

you get the wrong answer

But if you call

CFDSolver.getResiduals(ap)
CFDSolver.computeJacobianVectorProductFwd(....)
CFDSolver.computeJacobianVectorProductBwd(....)

you get the right answer

If you trace the error back, you see that this is because the spectral radii (radI, radJ, radK) aren't set by getResiduals() with the default options.

If you set the option 'useblockettes' to False, there is no issue. So it seems like the spectral radii are being zeroed before use in master_b when using blockettes.

If you add an extra call to timeStep_block at the beginning of master_b(), it fixes the issue, so I'm fairly certain it is because of the radii.

Expected behavior

Both should give the same answer

Code version (if relevant)

Python version: 2.7
External dependencies: PETSc 3.11, OpenMPI 3.1.4
Internal packages: latest of everything

Spectral radii are out of date in some scenarios

Type of issue

Bug

Description

The cache-optimized residual routine (the blockette residual) does not update the spectral radii arrays in main memory. This routine computes these intermediate variables in cached memory but does not copy the computed spectral radii values back to main memory. This later causes issues because the reverse-mode AD routines rely on the spectral radii values being up to date.

Steps to reproduce issue

There are multiple ways to run into this issue; I will lay out two scenarios. The issue only comes up if the 'useblockettes' option is set to True, which is the current default.

Scenario 1:

Analysis + adjoint

  1. A regular adflow analysis (call the pyadflow object) with 'useblockettes': True, which is the default option. This way, the spectral radii (radi, radj, radk arrays) are not updated as the solver converges.
  2. A call to evalFunctionsSens after the analysis will set up the adjoint solver. During this step, if the user did not specify 'adpc':True, which is not the default option, the spectral radii arrays will be wrong. Any reverse mode AD call after this point (until the fwd mode AD is run) will be wrong.
  3. This will cause inaccurate sensitivities.

Going back to step 2: if the user has 'adpc': True, the code runs the forward AD routines for the preconditioner computations before any reverse AD call, which updates the mentioned arrays and avoids the problem.

Similarly, having 'nkadpc':True with the NK solver has a similar effect. However, with this case, the arrays will still be outdated because they will be the values calculated at the last NK preconditioner computation.

If an AD preconditioner is never done with the NK or the adjoint solvers, the spectral radii arrays will have the values computed right after flow initialization and before solution starts (https://github.com/mdolab/adflow/blob/master/src/solver/solvers.F90#L1025), and will be wrong.
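The failure mode is a generic stale-cache pattern. A minimal Python sketch (hypothetical illustration, not ADflow code; all names invented) shows how a fast path can leave a main-memory array stale:

```python
# Hypothetical sketch of the blockette failure mode: the fast residual
# path computes an intermediate quantity in a local buffer and never
# copies it back, so a later reverse-mode consumer reads stale data.
rad_main = [0.0, 0.0, 0.0]  # stands in for the radi/radj/radk arrays

def residual_fast(state):
    rad_local = [2.0 * s for s in state]  # computed in "cached" memory only
    return sum(rad_local)                 # rad_main is left untouched

def residual_slow(state):
    for i, s in enumerate(state):
        rad_main[i] = 2.0 * s             # slow path writes back to main memory
    return sum(rad_main)

state = [1.0, 2.0, 3.0]
residual_fast(state)   # rad_main stays [0.0, 0.0, 0.0]: stale
residual_slow(state)   # rad_main now holds the fresh values
```

The fix described in this issue corresponds to making the fast path perform the write-back that residual_slow does.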

Scenario 2: Standalone reverse AD call:

This scenario is explained in issue #32

Current behavior

Spectral radii are not updated in many cases.

Expected behavior

Before any reverse mode AD call, this intermediate variable should have the correct value computed with the updated state.

Code version (if relevant)

Independent of any code version

Analysis process divergence

If only DADI is used during the analysis, it converges. However, it easily diverges when DADI is combined with the ANK and NK solvers. How can this problem be solved?

Zipper mesh bug

Fix the bug causing the gradients to be off when using zipper meshes.

Unsteady timestep doesn't stop after l2Convergence is reached

Type of issue

What types of issue is it?

  • Bug (probably?)

Description

In an unsteady RANS simulation, the solver doesn't stop when the l2Convergence is reached for a given timestep.

Steps to reproduce issue

Run the attached files.

Current behavior

The following behavior only happens after the first time step. The first time step works fine.

If the RK or DADI solver is used, the solver runs until nCycles is reached.
If the ANK solver is used, the CFL number goes to 0 after l2Convergence is approximately reached, even though ankcflmin is set to 1.0.
The NK solver simply stops making progress after l2Convergence is approximately reached.

Expected behavior

Once l2Convergence is reached, the solver should stop executing the current timestep and start the next one.

Code version (if relevant)

adflow 2.2

Doxygen is missing

Doxygen is not set up on the new docs site, but it is possible with readthedocs.

Type of issue

What types of issue is it?
Select the appropriate type(s) that describe this issue

  • Documentation update

Request for Windows Binary

It is not an issue but a humble request from a CFD teacher. Most of my students use laptops with Windows 10. It would be a great help if a vanilla .exe build of the code for Windows 10 were included. This would help the students explore the MDO options along with CFD first and then explore the code. Apologies for such a dumb request.
Thanks in advance
Narahari

Deprecation warnings

Description

Running ADflow causes a number of DeprecationWarnings.

/home/mdolabuser/repos/adflow/adflow/pyADflow.py:3623: DeprecationWarning: invalid escape sequence \*
  """This the main python gateway for producing forward mode jacobian
/home/mdolabuser/repos/adflow/adflow/pyADflow.py:3800: DeprecationWarning: invalid escape sequence \*
  """This the main python gateway for producing reverse mode jacobian
/home/mdolabuser/repos/baseclasses/baseclasses/BaseSolver.py:184: DeprecationWarning: printCurrentOptions is deprecated. Use printOptions instead.
  warnings.warn("printCurrentOptions is deprecated. Use printOptions instead.", DeprecationWarning)

The first two are probably related to the way the docstrings are set up. The last one is due to an API change in baseclasses; simply rename printCurrentOptions to printOptions.
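For the first two warnings, the usual fix is to make the docstrings raw strings so the backslash in \* is taken literally; a minimal sketch with a hypothetical function name:

```python
# A docstring containing "\*" is an invalid escape sequence, which newer
# Python versions flag with a DeprecationWarning (or SyntaxWarning).
# Prefixing the string with r keeps the backslash literally.
def jacobian_vector_product_fwd():
    r"""Main python gateway for producing a forward mode Jacobian
    product. A literal \* can be written safely in a raw docstring."""
    return None
```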

Update the performance section in docs

Description

The performance section is seriously out of date. We should update it with what we now know, together with some best practices for improving performance.
Some questions:

  • How many procs to request for "best" performance?
  • What's the most number of procs that can be reasonably requested to improve walltime?
  • What's the memory requirements for different options? Is this info still up to date?
  • Adjoint performance?
  • How to load balance for multipoint problems?

test and debug the agglomerated multigrid preconditioner

Type of issue

  • Bugfix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

Description

The agglomerated multigrid (AGMG) preconditioner was added to ADflow a while back but never got tested extensively. This functionality should be tested and debugged so that we can use it regularly, as it possibly provides better scaling.

Reduce ADflow Compile Time

Type of issue

What types of issue is it?

  • Optimization

Description

It takes an order of magnitude longer to compile ADflow after updating to the newest PETSc.
If others are experiencing this issue, I think it warrants attention, as it is a significant frustration in the debugging process.

Update the adflow default options

The default options for ADflow are not up to date with recent development.
I suggest that we examine the options and make a new set of defaults.

Some changes I think should be made...

'useank': True  # it's been thoroughly vetted and is now a key feature of ADflow
'useblockettes': False  # this feature didn't get enough testing, and we are still finding bugs with it; it's better turned on only once the optimization is known to be working well
'equationtype': 'rans'  # almost everyone uses ADflow for RANS nowadays

What do you all think?
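Collected as an options dictionary, the proposal might look like the following sketch (these are the values suggested in this issue, not ADflow's shipped defaults):

```python
# Sketch of the proposed new defaults from this issue.
proposed_defaults = {
    "useank": True,          # thoroughly vetted, now a key feature
    "useblockettes": False,  # still surfacing bugs; opt in later
    "equationtype": "rans",  # most users run RANS nowadays
}
```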

run_reg_tests.py fails

I've managed to install ADflow with all the necessary prerequisite modules. However, I get the following message towards the end when I run make in my terminal.

Testing if module libadflow can be imported...
Module libadflow was successfully imported.

Note that it says libadflow instead of just adflow; I'm not sure if this is OK. I ran the run_reg_tests.py script in the python/reg_tests/ folder and got the following errors:

Error: number of @value lines in file not the same!
adflow test1: Failure!
Error: number of @value lines in file not the same!
adflow test2: Failure!
Error: number of @value lines in file not the same!
adflow test3: Failure!
Error: number of @value lines in file not the same!
adflow test4: Failure!
Error: number of @value lines in file not the same!
adflow test5: Failure!
Error: number of @value lines in file not the same!
adflow test6: Failure!
Error: number of @value lines in file not the same!
adflow test7: Failure!
Error: number of @value lines in file not the same!
adflow test8: Failure!
Error: number of @value lines in file not the same!
adflow test9: Failure!
Error: number of @value lines in file not the same!
adflow test10: Failure!
Error: number of @value lines in file not the same!
adflow test11: Failure!
Error: number of @value lines in file not the same!
adflow test12: Failure!
Error: number of @value lines in file not the same!
adflow test13: Failure!
Error: number of @value lines in file not the same!
adflow test14: Failure!
Error: number of @value lines in file not the same!
adflow test15: Failure!
Error: number of @value lines in file not the same!
adflow test16: Failure!
Error: number of @value lines in file not the same!
adflow test17: Failure!
Error: number of @value lines in file not the same!
adflow test18: Failure!

Thanks for your help.

2D wing rotation calculation can't get a result

Good day,
Sorry for disturbing you. I'm a graduate student from NPU. I have already installed ADflow. But when I use the code to run a steady case of a 2D wing in the x-y plane rotating about the z axis with zero far-field speed, something puzzling happens. Only when the rotation speed is smaller than 0.2 rad/s does the code produce a result; a larger rotation speed gives a NaN in the first iteration. I don't know why this happens. The radius of rotation is 40 m and the chord of the wing is 1 m. At a rotation speed of 0.2 rad/s, the run takes 3 or more times as long as the same 2D wing in a free stream. Is this a problem with the low-Mach preconditioning? I read parts of the code, and there is something about "low speed precondition"; in my jejune opinion, it is incomplete. Looking forward to your reply.

Testflo default NUM_PROCS causes tests to fail on CentOS with Intel compilers

Description

When the tests are run on Intel with testflo -v . or with testflo -v -n 5 (or greater), the tests encounter a bus error:

Caught signal number 7 BUS: Bus Error, possibly illegal memory access

It is not well understood why this issue occurs, but it can easily be avoided by passing testflo the argument -n followed by a number less than or equal to 4 (this may be greater if your machine has more than 4 cores).

Steps to reproduce issue

Please provide a minimum working example (MWE) if possible

  1. run the latest CentOS Intel docker image
  2. navigate to adflow folder and download the test input files
  3. run the tests with testflo .
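The workaround of capping concurrency can be sketched in a few lines of Python (the threshold of 4 comes from this report and may differ on other machines):

```python
import os

# Choose a testflo worker count that avoids the reported bus error:
# at most 4, and never more than the machine has cores.
def safe_testflo_procs(max_procs=4):
    return min(max_procs, os.cpu_count() or 1)

# Then invoke the tests as, e.g.: testflo -v -n <n> .
n = safe_testflo_procs()
```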

Code versions

List versions only if relevant

  • Python 3
  • ADflow 2.2.1

getFreeStreamResidual method changes functionals for viscous equation types

Type of issue

What types of issue is it?
This is a bug

Description

The pyADflow function getFreestreamRes causes a state change for viscous equation types ('laminar ns', 'rans').

This is a problem because many of our old regression tests use the helper function standard test, which calls getFreestreamRes and then evaluates the functionals. This means that the functional values in some reference files are wrong.

Steps to reproduce issue

call evalFunctions
call getFreestreamRes
call evalFunctions # this will give you a different answer

see runscript

Expected behavior

call evalFunctions
call getFreestreamRes
call evalFunctions # this will give you the same answer
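The expected behavior amounts to an invariant that a regression test could assert directly; a generic toy sketch (not ADflow code, names invented):

```python
# Toy sketch of the invariant: a pure query routine must not change
# the answer of a subsequent function evaluation.
class ToySolver:
    def __init__(self):
        self.state = 1.0

    def eval_functions(self):
        return 2.0 * self.state

    def get_freestream_res(self):
        # Should be side-effect free; the reported ADflow bug is that
        # the real routine mutates solver state for viscous equations.
        return self.state

solver = ToySolver()
before = solver.eval_functions()
solver.get_freestream_res()
after = solver.eval_functions()
assert before == after  # the invariant this issue says ADflow violates
```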

Code version (if relevant)

Python version: 2.7

External dependencies:

Internal packages:
ADFlow 2.1.0

Re-generating differentiated code using latest version of Tapenade (3.16)

I'm trying to differentiate some of the relevant residual routines with respect to the Spalart-Allmaras model constants, although this issue is not specific to that application. I installed the latest version of Tapenade (3.16), and following the instructions in the documentation for adflow, I was able to successfully generate the differentiated code that I need.

However, I noticed that the current differentiated code was generated using Tapenade v3.10. The newly generated code's files are all named differently, e.g. in /adflow/src/adjoint/outputForward/

sa_b.f90 (3.10)
sa.F90_b.f90 (3.16)

and the new version appends iff to some modules in use by the differentiated routines, e.g.

use blockpointers (3.10)
use blockpointersiff (3.16)

but blockpointersiff doesn't exist. From my perspective, I could try going through the newly-generated code manually and copying the relevant lines to the old formatted differentiated code. I have been trying to build the older version of Tapenade on my system as well, but have had difficulties doing so. I'm curious as to what the best approach to this problem would be for me at the moment, whether I should focus on building the older version of Tapenade or retool the existing differentiated files manually.

Verification and validation

Type of issue

Documentation update

Description

Document where verification and validation results and cases can be found for ADflow.

  • Currently unclear what has been verified and validated and what hasn't.
  • This makes it challenging to explain discrepancies, troubleshoot, and maintain trust in the long term.
  • 2-D & 3-D
  • Equations types (Euler, laminar NS, RANS)
  • Turbulence models
  • Overset cases
  • Rotating reference frames
  • Actuator zones
  • etc.

Build difficulty

As was suggested in a previous post, I installed Ubuntu through WSL2 on a Windows 10 laptop. On my first attempt I got the message "python not found", so I installed it (version 2.7, I think). When I tried to build the executable a second time, I ran into the following problem:

make[2]: *** No rule to make target '/lib/petsc/conf/variables'. Stop.
make[2]: Leaving directory '/mnt/d/Aero_Auto/cfd_HKN/adflow-master/adflow-master/src/build'
Makefile:30: recipe for target 'adflow_build' failed
make[1]: *** [adflow_build] Error 2
make[1]: Leaving directory '/mnt/d/Aero_Auto/cfd_HKN/adflow-master/adflow-master'
Makefile:6: recipe for target 'default' failed
make: *** [default] Error 2

What am I doing wrong? Having worked on IBM mainframes and Windows machines, I am at a loss here. Any suggestions would be of immense help.
Regards
Narahari
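A likely cause of the "No rule to make target '/lib/petsc/conf/variables'" error above is that the PETSC_DIR and PETSC_ARCH environment variables were unset, so the Makefile expanded them to empty paths. A hedged sketch of the environment setup (paths are examples only; point them at your actual PETSc build):

```shell
# Example values only; adjust to your PETSc installation.
export PETSC_DIR=$HOME/packages/petsc-3.11.0
export PETSC_ARCH=real-opt
# Then re-run make from the ADflow root.
```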

Discontinuities in post-processed twist and t/c distributions

Type of issue

What types of issue is it?

  • Bugfix (non-breaking change which fixes an issue)

Description

The post-processed twist and t/c distributions that ADflow outputs sometimes have spurious discontinuities and jumps. This is related to the script jumping between the upper and lower trailing edges of the geometry, or some equivalent jump, while calculating the twist from the two farthest points in a section.

For examples, see the twist and t/c plots in:

Expected behavior

There shouldn't be spurious discontinuities and jumps.

Add warning for reverse mode routines

As it is, master_b doesn't do a forward pass, so some other routine that sets the variables must be called first. Otherwise, computeJacobianVectorProductBwd() will return garbage.

We should add a warning (on the Python layer?) to make sure folks know this, so they don't waste their time like I did.
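A sketch of what such a guard could look like on the Python layer (hypothetical names, not actual pyADflow code):

```python
import warnings

class ReverseModeGuard:
    """Hypothetical sketch of the proposed warning: reverse mode AD is
    only valid after some routine has populated the intermediate
    variables with a forward pass."""

    def __init__(self):
        self._forward_pass_done = False

    def run_forward(self):
        # Stands in for a residual evaluation that sets the variables.
        self._forward_pass_done = True

    def compute_jacobian_vector_product_bwd(self):
        if not self._forward_pass_done:
            warnings.warn(
                "Reverse mode called before any forward pass; "
                "intermediate variables may be stale and the result "
                "may be garbage.",
                RuntimeWarning,
            )
        return 0.0  # placeholder result
```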

non-default bug: second order terms for turbulence advection may not be enabled

Type of issue

This is a non-default bug; by default we use first order advection terms for the turbulence models, so this issue does not come up in 99% of cases.

The second order advection terms for the turbulence model are only activated if we use the turbSolveSegregated function from turbAPI: https://github.com/mdolab/adflow/blob/master/src/turbulence/turbAPI.F90#L33-L38
This solver is only used along with a smoother, or with the de-coupled ANK solver when the ANK turbKSP solver is not used.

The variable is also set in turbResidual in the same module, but that method is only called with explicit time marching with the RK method.

This variable is not set during initialization because we only want this to be active on the finest mesh, so the fix should be done in blockette residual routines.

Most users don't use this non-default option, so this won't cause issues for them. On top of that, most people start with the default ANK + turbDADI solver combination, which does set the correct option. Problems only arise when users want to use a Newton-based solver for turbulence from initialization. This possibly causes issues with restarts as well, where the initial residual might be low enough that the turbDADI solver is never used.

We should also check what happens when this variable is never set properly. The solver would then take the if branches based on the uninitialized value of the secondOrd variable in turbMod.

Current behavior

Second order terms for turbulence advection are not activated if turbulence is solved purely with a Newton-based solver.

The second order flag is set if we use the DADI solver for turbulence at any point in the iteration process.

Expected behavior

Regardless of solver settings, if the user specifies second order advection terms for turbulence, they should get second order turbulence terms.

Inlet mass flow rate BC

Hi,
I'm trying to define an inflow mass flow rate using BCInflowSubsonic, namely by specifying the density and velocity magnitude.
The CGNS file passes the internal checks in cgnsview, but ADflow fails with the message "Not enough parameters for INFLOW boundary..."
Is there any other way to implement a mass flow rate boundary condition, maybe with BCMassBleedInflow?

Thanks for your help,

Shlomy


"V" not available as a design variable

Description

We currently cannot include the inflow velocity V among the design variables due to an incomplete variable setup routine.
This input variable must be added to the available set, which already includes mach, alpha, etc.
The bugfix will also require modifications to baseclasses.

Steps to reproduce issue

On a working runscript for ADflow, include 'V' among the design variables with:
ap.addDV('V') where ap is your AeroProblem instance.

Current behavior

The initialization terminates abruptly with the following message:

File "<path-to-baseclasses>/baseclasses/pyAero_problem.py", line 525, in addDV
    raise ValueError('%s is not a valid design variable' % key)
ValueError: V is not a valid design variable

Expected behavior

ADflow should run and evaluate the sensitivities, including those w.r.t. the inflow velocity

Code versions

  • BaseClasses v1.2.2
  • ADflow v2.2.1
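The fix amounts to adding 'V' to the set of keys that the addDV validation accepts; a generic sketch of that pattern (hypothetical, not the real baseclasses code):

```python
# Hypothetical sketch of the addDV key validation; the fix is simply
# to include "V" in the recognized design-variable set.
VALID_DVS = {"mach", "alpha", "beta", "P", "T", "V"}  # "V" newly added

def add_dv(key):
    if key not in VALID_DVS:
        raise ValueError("%s is not a valid design variable" % key)
    return key

add_dv("V")  # no longer raises once "V" is recognized
```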
