
FLAME AI Workshop - Superresolution of Turbulent Data

Workshop Website | Kaggle

This repository contains a submission to the Stanford FLAME AI workshop on super-resolving turbulent flow data.

The model used is an SR3 diffusion model taken directly from https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement. The corresponding paper can be found here: Image Super-Resolution via Iterative Refinement (SR3).

An example result from the validation set can be seen below, together with the corresponding low-resolution input and high-resolution ground truth.

[figure: low-resolution input, super-resolved output, high-resolution ground truth]

The SR3 model was run twice: first with [ux,uy,uz] as input channels, then with [rho,uy,uz]. For the final results, ux, uy, and uz are taken from the first run, while rho is taken from the second run.
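
A minimal sketch of this recombination step is given below; the file names and the (N, 128, 128, channels) array layout are hypothetical placeholders for illustration, not the repository's actual outputs.

```python
# Hedged sketch of the channel recombination described above: ux, uy, uz come from
# the [ux,uy,uz] run and rho from the [rho,uy,uz] run. File names and array layout
# are assumptions for illustration only.
import numpy as np

pred_vel = np.load("run1_ux_uy_uz.npy")   # hypothetical output of the first run
pred_rho = np.load("run2_rho_uy_uz.npy")  # hypothetical output of the second run

final = np.stack(
    [
        pred_rho[..., 0],  # rho taken from the second run
        pred_vel[..., 0],  # ux taken from the first run
        pred_vel[..., 1],  # uy taken from the first run
        pred_vel[..., 2],  # uz taken from the first run
    ],
    axis=-1,
)
np.save("final_prediction.npy", final)
```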

A pretrained model was downloaded from the SR3 GitHub repository above; the pretrained weights can be found here: Google Drive.
Instructions on how to use the SR3 model for training and inference can also be found in that repository.


The model was first trained for 1,000,000 iterations using the above-mentioned pretrained weights and the [ux,uy,uz] version of the turbulence training data. The corresponding .json setup file is config/sr_sr3_16_128_flame.json; its val/train phase setting switches between training and validation. For the first training run, resume_state was set to "pretrain/I640000_E37", which points to the pretrained weights from the Google Drive. For inference on the test data, config/sr_sr3_16_test_flame.json was used.
The model was then trained for another 200,000 iterations using the [rho,uy,uz] formulation. The corresponding .json files are sr_sr3_16_128_flame_rho.json for training and sr_sr3_16_128_flame_rho_test.json for testing.
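
For reference, the two training runs could be launched roughly as follows, assuming the SR3 repository's documented entry point (sr.py with -p/--phase and -c/--config); this is a sketch, not a record of the exact commands used here.

```python
# Minimal sketch of launching the two training runs, assuming the SR3 repository's
# `sr.py -p <phase> -c <config>` interface.
import subprocess

# Run 1: [ux,uy,uz] channels; the config's resume_state points at "pretrain/I640000_E37".
subprocess.run(
    ["python", "sr.py", "-p", "train", "-c", "config/sr_sr3_16_128_flame.json"],
    check=True,
)

# Run 2: [rho,uy,uz] channels, trained for a further 200,000 iterations.
subprocess.run(
    ["python", "sr.py", "-p", "train", "-c", "config/sr_sr3_16_128_flame_rho.json"],
    check=True,
)
```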

Checkpoints for both runs are provided on OneDrive, along with the PNG and .mdb versions of the datasets. The flame_ai_challenge_process_inputs_and_results.ipynb notebook creates the PNG files from the original data by rescaling every field to the range 0-255. The same notebook also contains the postprocessing steps applied to the results before submission and a few samples from the super-resolved results.
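
The rescaling and its inverse amount to a per-field min-max normalization; a minimal sketch is shown below. The function and file names, and the way the per-field minima/maxima are kept around, are assumptions for illustration, not the notebook's exact code.

```python
# Hedged sketch of the 0-255 PNG rescaling and its inverse used at postprocessing.
# The per-field min/max bookkeeping and file names are illustrative assumptions.
import numpy as np
from PIL import Image

def to_uint8(field):
    """Min-max rescale a float field to [0, 255]; return the limits for inversion."""
    fmin, fmax = float(field.min()), float(field.max())
    img = np.round((field - fmin) / (fmax - fmin) * 255.0).astype(np.uint8)
    return img, fmin, fmax

def from_uint8(img, fmin, fmax):
    """Map a uint8 image back to physical units before writing the submission."""
    return img.astype(np.float64) / 255.0 * (fmax - fmin) + fmin

field = np.random.rand(128, 128)                    # stand-in for one flow field
img, fmin, fmax = to_uint8(field)
Image.fromarray(img).save("sample_hr_128.png")      # hypothetical file name
restored = from_uint8(np.asarray(Image.open("sample_hr_128.png")), fmin, fmax)
```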

Data in png format can be found here: OneDrive

  • /train/ folder has data for training with [ux,uy,uz] channels
  • /train_density/ folder has data for training with [rho,uy,uz] channels
  • /val/ folder has data for validation with [ux,uy,uz] channels
  • /val_density/ folder has data for validation with [rho,uy,uz] channels
  • /test/ folder has data for testing with [ux,uy,uz] channels
  • /test_density/ folder has data for testing with [rho,uy,uz] channels

All of these have subfolders hr_128 for the high-resolution 128x128 data and lr_16 for the low-resolution 16x16 data; they were generated by flame_ai_challenge_process_inputs_and_results.ipynb.

The data has to be preprocessed for the SR3 model. The already preprocessed data can be found here: OneDrive, in the folders named flame_ai_16_128_prepared_... . It was generated by the prepare_data.py preprocessing script from the SR3 repository.
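
The core of that preparation step is assumed to be the creation of an additional sr_16_128 folder holding the low-resolution images bicubically upsampled to 128x128, following the SR3 repository's data layout; a minimal sketch with illustrative paths is below.

```python
# Hedged sketch of the assumed prepare_data.py output: an sr_16_128 folder holding
# the 16x16 inputs upsampled to 128x128 with bicubic interpolation. Paths are
# illustrative, not the exact OneDrive folder names.
from pathlib import Path
from PIL import Image

src = Path("train/lr_16")       # existing low-resolution PNGs
dst = Path("train/sr_16_128")   # upsampled copies expected by the SR3 dataloader
dst.mkdir(parents=True, exist_ok=True)

bicubic = getattr(Image, "Resampling", Image).BICUBIC  # works across Pillow versions
for png in sorted(src.glob("*.png")):
    Image.open(png).resize((128, 128), bicubic).save(dst / png.name)
```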


Model weights

The trained model parameters can be found in the experiments folders; the results that are used and visualized in the .ipynb can be found in the corresponding folder on OneDrive. Some experiments are for training and some for testing only. The weights are stored in the /checkpoint/ folders, and the super-resolved images are stored in the /results/ folders in PNG format. These are loaded by the .ipynb for visualization and postprocessing.

Training:

  • Training for the [ux,uy,uz] case: sr_flame_ai_230911_185035
    • The checkpoints folder contains many trained weights; the last one was used for inference: I1000000_E818
  • Training for the density case with channels [rho,uy,uz]: sr_flame_ai_230912_201811
    • Checkpoint used for inference: I1200000_E1252

Testing (these folders do not contain weights, only the final results in PNG format):

  • Validation results with [ux,uy,uz]: sr_flame_ai_230912_130804
  • Test results with [ux,uy,uz]: sr_flame_ai_230912_175205
  • Test results with the density formulation [rho,uy,uz]: sr_flame_ai_230913_094253

The model weights can be loaded through the .json config files (see any file in the config directory) by setting resume_state to the folder containing the model weights. For example, config/sr_sr3_16_128_flame_rho_test.json contains the path to the last trained weights for inference on the test set with the [rho,uy,uz] model.
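
A minimal sketch of that edit is shown below. Only the resume_state key is taken from this README; the surrounding "path" section follows the SR3 config layout, and the exact checkpoint path is an assumption based on the experiment folders listed above.

```python
# Hedged sketch: pointing a test config at a trained checkpoint before inference.
# The "path"/"resume_state" structure follows the SR3 configs; the checkpoint path
# below is an assumption based on the experiment folders listed above.
import json

cfg_file = "config/sr_sr3_16_128_flame_rho_test.json"
with open(cfg_file) as f:
    cfg = json.load(f)

cfg["path"]["resume_state"] = (
    "experiments/sr_flame_ai_230912_201811/checkpoint/I1200000_E1252"
)

with open(cfg_file, "w") as f:
    json.dump(cfg, f, indent=4)
```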
