berkeleylab / inference-engine

A deep learning library for use in high-performance computing applications in modern Fortran

License: Other

Fortran 96.81% Shell 2.72% Gnuplot 0.47%
deep-learning fortran2018 inference artificial-intelligence machine-learning neural-networks ann artificial-neural-networks feedforward-neural-network neural-network

inference-engine's People

Contributors

everythingfunctional, federicavil, jordanwelsman, kareem-weaver, ktras, rouson


inference-engine's Issues

Suggestion: short description on repo page

We should add a short description to the repo. Something along the lines of:
"A Fortran 2018 <package/module> which enables inference using trained dense neural networks."

Issue Compiling with ifx

Here's the error I get trying to compile with ifx:

layer_s.f90                            failed.
[ 57%] Compiling...
././src/inference_engine/layer_s.f90(22): error #6197: An assignment of different structure types is invalid.   [CONSTRUCT]
    layer%neuron = neuron_t(layer_lines, start+1)
-------------------^
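Error #6197 is ifx's diagnostic for an intrinsic assignment whose left- and right-hand sides have different derived types. The flagged line looks like a valid structure-constructor assignment, so the repository code may be hitting a generic-resolution problem or an ifx bug; a hypothetical minimal pattern (types invented for illustration, not taken from the repository) that legitimately triggers the diagnostic:

```fortran
! Hypothetical types; not taken from the repository source.
program different_types
  implicit none
  type :: a_t
    integer :: i
  end type
  type :: b_t
    integer :: i
  end type
  type(a_t) :: a
  a = b_t(1)  ! error #6197: assignment of different structure types
end program
```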

Remove nagfor workaround

There is a workaround for a nagfor bug in the src dir that can be removed because the latest version of nagfor should have fixed this.

  • Remove the macro and test to make sure everything still works
  • Rename the .F90 file to .f90

Train additional Thompson microphysics procedure proxies

The [learn-microphysics-procedures.f90] example demonstrates how to train a neural network to model functions in the mp_thompson.f90 module. A possible next step is to model the remaining procedures in the same module. The full set of procedures is below.

Functions

  • REAL FUNCTION GAMMLN(XX)
  • REAL FUNCTION GAMMP(A,X)
  • REAL FUNCTION WGAMMA(y)
  • REAL FUNCTION RSLF(P,T)
  • REAL FUNCTION RSIF(P,T)

Subroutines

  • SUBROUTINE mp_gt_driver(
    - arguments:
    qv, qc, qr, qi, qs, qg, ni, nr, &
    th, pii, p, dz, dt_in, itimestep, &
    RAINNC, RAINNCV, &
    SNOWNC, SNOWNCV, &
    GRAUPELNC, GRAUPELNCV, SR, &
    ids,ide, jds,jde, kds,kde, &             ! domain dims
    ims,ime, jms,jme, kms,kme, &             ! memory dims
    its,ite, jts,jte, kts,kte               ! tile dims
  • subroutine mp_thompson
    - arguments:
     qv1d, qc1d, qi1d, qr1d, qs1d, qg1d, ni1d, &
     nr1d, t1d, p1d, dz1d, &
     pptrain, pptsnow, pptgraul, pptice, &
     kts, kte, dt, i, j
  • subroutine qr_acr_qg
    - arguments: none
    - reads/writes: qr_acr_qg_mpt.dat
    - module variables accessed: ____
  • subroutine qr_acr_qs
    - arguments: none
    - reads/writes: qr_acr_qs_mpt.dat
    - module variables accessed: ____
  • subroutine freezeH2O
    - arguments: none
    - reads/writes: freezeH2O_mpt.dat
    - module variables accessed: ____
  • subroutine qi_aut_qs
    - arguments: none
    - no reads or writes
    - module variables accessed: ___
  • subroutine table_Efrw
    - arguments: none
    - reads/writes: none
    - module variables accessed: ___
  • subroutine table_Efsw
    - arguments: none
    - reads/writes: none
    - module variables accessed: ___
  • subroutine table_dropEvap
    - arguments: none
    - reads or writes: none
    - module variables accessed: ___
  • SUBROUTINE GCF
    - arguments: GAMMCF, A, X , GLN
    - reads or writes: none
    - module variables accessed: ___
  • SUBROUTINE GSER
    - arguments: GAMSER, A, X, GLN
    - reads or writes: none
    - module variables accessed: ___

Prep for opening the source

  • Add LICENSE.txt file with copyright notice and license agreement
  • Add statement referring to the license at the top of each source file
  • Add build instructions to the README.md
  • Add a basic FORD project file
  • Set up the CI to post the ford documentation to GitHub Pages

Support LLVM Flang

Currently, the command

fpm test --compiler flang-new --flag "-mmlir -allow-assumed-rank"

yields output that ends as follows:

[ 57%]        inference_engine_m_.f90  done.

error: Semantic errors in ././src/inference_engine/inference_engine_m_.f90
./././src/inference_engine/inference_engine_m_.f90:72:32: error: Result of pure function may not have polymorphic ALLOCATABLE ultimate component '%activation_strategy_'
        type(inference_engine_t) inference_engine
                                 ^^^^^^^^^^^^^^^^
./././src/inference_engine/inference_engine_m_.f90:30:50: Declaration of 'activation_strategy_'
      class(activation_strategy_t), allocatable :: activation_strategy_ ! Strategy Pattern facilitates elemental activation
                                                   ^^^^^^^^^^^^^^^^^^^^
./././src/inference_engine/inference_engine_m_.f90:104:24: error: Result of pure function may not have polymorphic ALLOCATABLE ultimate component '%activation_strategy_'
        type(exchange_t) exchange
                         ^^^^^^^^
./././src/inference_engine/inference_engine_m_.f90:52:50: Declaration of 'activation_strategy_'
      class(activation_strategy_t), allocatable :: activation_strategy_ ! Strategy Pattern facilitates elemental activation
                                                   ^^^^^^^^^^^^^^^^^^^^
<ERROR> Compilation failed for object " src_inference_engine_inference_engine_m_.f90.o "
<ERROR> stopping due to failed compilation
STOP 1
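Flang is enforcing a Fortran 2018 constraint on pure procedures: the result of a pure function may not be polymorphic and allocatable, nor have a polymorphic allocatable ultimate component. A minimal illustration of the rejected pattern, using the component names from the log (the surrounding code is a sketch, not the repository source):

```fortran
module reproducer_m
  implicit none

  type, abstract :: activation_strategy_t
  end type

  type :: inference_engine_t
    ! Polymorphic ALLOCATABLE ultimate component:
    class(activation_strategy_t), allocatable :: activation_strategy_
  end type

contains

  pure function construct() result(inference_engine)
    ! Flang rejects this declaration: the result of a pure function
    ! may not have a polymorphic ALLOCATABLE ultimate component.
    type(inference_engine_t) inference_engine
  end function

end module
```

A common workaround is to make the offending functions impure or to replace the polymorphic allocatable component with a non-polymorphic design.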

Isolate and report NAG compiler bug

git checkout add-file-reader
fpm test --compiler nagfor --flag -fpp

...

NAG Fortran Compiler Release 7.1(Hanzomon) Build 7113
Questionable: ./src/inference_engine_s.f90, line 229: Variable C set but never referenced
Panic: ./src/inference_engine_s.f90: free_TBF_item: Invalid item?
Internal Error -- please report this bug
Abort
<ERROR> Compilation failed for object " src_inference_engine_s.f90.o "
<ERROR>stopping due to failed compilation
STOP 1

Feature: concurrent multi-inference

PR #7 contains a skeletal demonstration of concurrent inference using multiple networks.

To Do

  • Finish the space_delimited_strings_to_array() internal function.
  • Define an array of inputs for each network
  • Write a test

The new test could evaluate the XOR truth table concurrently, with each do concurrent iteration using an independent copy of the XOR neural network. This is potentially faster than the sequential truth-table evaluation in the existing tests.
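A sketch of such a test, assuming a hypothetical infer type-bound procedure and an array of network objects (neither is verified against the repository's actual API):

```fortran
! Sketch only: xor_network, infer, and the array shapes are assumptions,
! not the repository's actual API.
block
  integer i
  real outputs(4)
  real, parameter :: inputs(2,4) = &
    reshape([0.,0.,  0.,1.,  1.,0.,  1.,1.], [2,4])
  type(inference_engine_t) xor_network(4)  ! one independent copy per iteration

  do concurrent (i = 1:4)
    outputs(i) = xor_network(i)%infer(inputs(:,i))
  end do
  ! A trained XOR network should yield outputs close to [0., 1., 1., 0.]
end block
```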

Develop alternative "infer" method(s)

Each of the three-line do concurrent/dot_product blocks in the [infer] type-bound procedure can be collapsed down to a one-line invocation of matmul. By default, it seems likely that most compilers would generate faster code with matmul, but it's best to be able to compare the two approaches with multiple compilers on multiple platforms to determine whether or not matmul is always superior. Scenarios to consider:

  1. Using the compiler's default matmul implementation.
  2. Using a compiler option, if available, to switch to an optimized library version of matmul.
  3. Using do concurrent with various optimization flags (-O...) set.
  4. Using a compiler option, if available, to offload do concurrent to a GPU.
  5. Using a compiler option, if available, to offload matmul to a GPU.

Options 4 and possibly 5 are available with the Intel ifx compiler as of the 2022.3 version of oneAPI released two weeks ago. Options 4 and possibly 5 have also been available with the NVIDIA nvfortran compiler for about two years, but nvfortran has limited support for Fortran 2008 and extremely limited support for Fortran 2018. I believe our only 2008 features are do concurrent (which nvfortran supports), module function/module subroutine interface bodies, and submodule, which nvfortran might or might not support. Working around the latter two features would require considerable code revision but would not be too painful.

Let's develop an alternative implementation of infer that does this and enable switching between the two with a C-preprocessor macro, something like:

#ifdef DO_CONCURRENT_INFER
  module procedure infer
     ! (concurrent infer implementation)
  end procedure
#else
  module procedure infer
     ! (matmul implementation)
  end procedure
#endif
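For reference, the transformation described above, with hypothetical array names (weights, biases, activations, and z are illustrative, not the actual component names in infer):

```fortran
! Current style: explicit loop over neurons
do concurrent (neuron = 1:size(biases))
  z(neuron) = dot_product(weights(neuron,:), activations) + biases(neuron)
end do

! Equivalent one-line matmul form
z = matmul(weights, activations) + biases
```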
