
MAgNet: Mesh-Agnostic Neural PDE Solver (NeurIPS 2022)

This is the official repository for the paper "MAgNet: Mesh-Agnostic Neural PDE Solver" by Oussama Boussif, Dan Assouline, and Professors Loubna Benabbou and Yoshua Bengio.

In this paper, we address the problem of learning solutions to Partial Differential Equations (PDEs) that generalize to any mesh or resolution at test time. This effectively enables us to generate predictions at any point of the PDE domain.

MAgNet

Predictions

Citation

To cite our work, please use the following BibTeX entry:

@inproceedings{magnet_neurips_2022,
 author = {Boussif, Oussama and Bengio, Yoshua and Benabbou, Loubna and Assouline, Dan},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
 pages = {31972--31985},
 publisher = {Curran Associates, Inc.},
 title = {MAgNet: Mesh Agnostic Neural PDE Solver},
 url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/cf4c7ee0734cdfe09a099cf6cd7b117a-Paper-Conference.pdf},
 volume = {35},
 year = {2022}
}

Requirements

Start by installing the required modules:

pip install -r requirements.txt

Dataset

The dataset is available for download at the following link: magnet dataset. It contains two folders, 1d and 2d, for the 1D and 2D PDE datasets respectively.

The structure of the 1D dataset is as follows:

├───E1
│   ├───irregular
│   │       CE_test_E1_graph_100.h5
│   │       CE_test_E1_graph_200.h5
│   │       CE_test_E1_graph_40.h5
│   │       CE_test_E1_graph_50.h5
│   │       CE_train_E1_graph_30.h5
│   │       CE_train_E1_graph_50.h5
│   │       CE_train_E1_graph_70.h5
│   │       
│   └───regular
│           CE_test_E1_100.h5
│           CE_test_E1_200.h5
│           CE_test_E1_40.h5
│           CE_test_E1_50.h5
│           CE_train_E1_50.h5
│           
├───E2
│   └───regular
│           CE_train_E2_50.h5
│           CE_test_E2_100.h5
│           CE_test_E2_200.h5
│           CE_test_E2_40.h5
│           CE_test_E2_50.h5
│           
└───E3
    └───regular
            CE_test_E3_100.h5
            CE_test_E3_200.h5
            CE_test_E3_40.h5
            CE_test_E3_50.h5
            CE_train_E3_50.h5

Each file is named as follows: CE_{mode}_{dataset}_{resolution}.h5, where mode is train or test, dataset is E1, E2, or E3, and resolution denotes the grid resolution. The regular folder contains simulations on a regular grid, while irregular contains simulations on an irregular grid; irregular-grid files carry an additional graph token (e.g. CE_test_E1_graph_100.h5).
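As a quick sanity check when iterating over the 1D files, the naming scheme above can be parsed with a small helper. The function and regex below are illustrative only, not part of the repository:

```python
import re

# Matches CE_{mode}_{dataset}_{resolution}.h5, with an optional "_graph"
# token that marks irregular-grid files (see the tree above).
_CE_PATTERN = re.compile(
    r"^CE_(?P<mode>train|test)_(?P<dataset>E[123])"
    r"(?P<graph>_graph)?_(?P<resolution>\d+)\.h5$"
)

def parse_1d_filename(name):
    """Return (mode, dataset, resolution, is_irregular), or None if no match."""
    m = _CE_PATTERN.match(name)
    if m is None:
        return None
    return (
        m.group("mode"),
        m.group("dataset"),
        int(m.group("resolution")),
        m.group("graph") is not None,
    )
```

For example, `parse_1d_filename("CE_test_E1_graph_100.h5")` yields `("test", "E1", 100, True)`.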


The 2D dataset is structured as follows:

├── B1
│   ├── burgers_test_B1_128.h5
│   ├── burgers_test_B1_256.h5
│   ├── burgers_test_B1_32.h5
│   ├── burgers_test_B1_64.h5
│   ├── burgers_train_B1_128.h5
│   ├── burgers_train_B1_256.h5
│   ├── burgers_train_B1_32.h5
│   ├── burgers_train_B1_64.h5
│   ├── concentrated
│   │   ├── burgers_train_irregular_B1_128.h5
│   │   ├── burgers_train_irregular_B1_256.h5
│   │   ├── burgers_train_irregular_B1_512.h5
│   │   └── burgers_train_irregular_B1_64.h5
│   └── uniform
│       ├── burgers_train_irregular_B1_128.h5
│       ├── burgers_train_irregular_B1_256.h5
│       ├── burgers_train_irregular_B1_512.h5
│       └── burgers_train_irregular_B1_64.h5
└── B2
    ├── burgers_test_B2_128.h5
    ├── burgers_test_B2_256.h5
    ├── burgers_test_B2_32.h5
    ├── burgers_test_B2_64.h5
    ├── burgers_train_B2_128.h5
    ├── burgers_train_B2_256.h5
    ├── burgers_train_B2_32.h5
    └── burgers_train_B2_64.h5

Each file is named as follows: burgers_{mode}_{dataset}_{resolution}.h5, where mode is train or test, dataset is B1 or B2, and resolution is the grid resolution; irregular-grid files carry an additional irregular token (e.g. burgers_train_irregular_B1_128.h5). The concentrated folder contains simulations on an irregular grid where points are sampled around a specific region of the grid, while uniform contains simulations on a uniformly sampled irregular grid.
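The 2D naming scheme can be parsed the same way; note that the irregular token sits between the mode and the dataset name. Again, this helper is a hypothetical convenience, not part of the repository:

```python
import re

# Matches burgers_{mode}[_irregular]_{dataset}_{resolution}.h5
# as listed in the 2D tree above.
_BURGERS_PATTERN = re.compile(
    r"^burgers_(?P<mode>train|test)(?P<irr>_irregular)?"
    r"_(?P<dataset>B[12])_(?P<resolution>\d+)\.h5$"
)

def parse_2d_filename(name):
    """Return (mode, dataset, resolution, is_irregular), or None if no match."""
    m = _BURGERS_PATTERN.match(name)
    if m is None:
        return None
    return (
        m.group("mode"),
        m.group("dataset"),
        int(m.group("resolution")),
        m.group("irr") is not None,
    )
```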

Experiments

We use Hydra for configuration management and command-line parsing, so it is straightforward to run experiments with our codebase. Below is an example command that trains the MAgNet[CNN] model on the E1 dataset for 250 epochs on four GPUs:

python run.py \
model=magnet_cnn \
name=magnet_cnn \
datamodule=h5_datamodule_implicit \
datamodule.train_path={train_path} \
datamodule.val_path={val_path} \
datamodule.test_path={test_path} \
datamodule.nt_train=250 \
datamodule.nx_train={train_resolution} \
datamodule.nt_val=250 \
datamodule.nx_val={val_resolution} \
datamodule.nt_test=250 \
datamodule.nx_test={test_resolution} \
datamodule.samples=16 \
model.params.time_slice=25 \
trainer.max_epochs=250 \
trainer.gpus=4 \
trainer.strategy='ddp'

You can find the relevant scripts that were used to run experiments under the scripts folder.


Issues

0 bytes 2d dataset

Hello, I would like to download your code and dataset to replicate the research.

However, for the 2D dataset, most of the data files at the download link have a size of 0 bytes, making it impossible to reproduce the results.

Is it possible for you to upload the data?

Thanks

Question about test process

Dear Author,

First of all, thank you for your response last time.

I am writing to seek clarification on a specific aspect of your research. It appears to me that the proposed Magnet or MPNN methodologies require not just the PDE initial conditions as input, but also the ground truth data (PDE solutions) from the start up to a certain time window for making accurate predictions. Could you kindly confirm if my understanding is correct?

Moreover, I am keen to know if there have been attempts to conduct predictions based solely on initial conditions. I understand that in the case of Neural operator types, predictions are made for a specific time using only initial conditions. Could you elaborate on why there might be a difference in prediction methodologies between the Neural operator and the others?

Your insights will be greatly appreciated.
Kind regards,
