
RARE: Image Reconstruction using Deep Priors Learned without Ground Truth

Regularization by denoising (RED) is an image reconstruction framework that uses an image denoiser as a prior. Recent work has shown the state-of-the-art performance of RED with learned denoisers corresponding to pre-trained convolutional neural networks (CNNs). In this work, we propose to broaden the current denoiser-centric view of RED by considering priors corresponding to networks trained for more general artifact removal. The key benefit of the proposed family of algorithms, called regularization by artifact-removal (RARE), is that it can leverage priors learned on datasets containing only undersampled measurements. This makes RARE applicable to problems where it is practically impossible to obtain fully-sampled ground-truth data for training. We validate RARE on both simulated and experimentally collected data by reconstructing free-breathing whole-body 3D MRI into ten respiratory phases from heavily undersampled k-space measurements. Our results corroborate the potential of learning regularizers for iterative inversion directly on undersampled and noisy measurements. The supplementary material of this paper can be found here. The talk is available here.
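The RED-style fixed-point iteration that RARE builds on can be sketched in a few lines of NumPy. Everything below is illustrative, not the repository's API: the function names, the toy moving-average prior standing in for the trained artifact-removal CNN, and the 1D subsampling forward model are all assumptions made for the sketch.

```python
import numpy as np

def artifact_removal(x):
    # Toy prior: a 3-tap moving average, a stand-in for the trained
    # artifact-removal (A2A) network R(x) used by RARE.
    pad = np.pad(x, 1, mode="edge")
    return (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0

def rare_step(x, y, mask, gamma=0.5, tau=0.1):
    # One gradient step on data fidelity plus a RED-style regularizer.
    # Forward model (assumed for this sketch): y = mask * x_true + noise.
    grad_data = mask * (mask * x - y)          # gradient of 0.5||mask*x - y||^2
    grad_reg = tau * (x - artifact_removal(x)) # RED gradient: tau * (x - R(x))
    return x - gamma * (grad_data + grad_reg)

# Tiny 1D demo: recover a smooth signal from ~50% random samples.
rng = np.random.default_rng(0)
n = 64
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
mask = (rng.random(n) < 0.5).astype(float)     # undersampling mask
y = mask * x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(n)
for _ in range(200):
    x = rare_step(x, y, mask)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The point of the sketch is only the structure of the update: the prior enters solely through the residual x - R(x), so any artifact-removal network, not just a denoiser, can play the role of R.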

(Figure: visual pipeline of RARE)

How to run the code

Prerequisites for numpy-mcnufft

Python 3.6
tqdm
TensorFlow 1.13 or lower
SciPy 1.2.1 or lower
NumPy 1.17 or lower
Matplotlib 3.1.0

Prerequisites for torch-mcnufft

the above prerequisites, plus PyTorch 1.13 or lower

It is recommended to use Conda to install all dependencies.
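A minimal Conda setup might look like the following. The environment name and the exact package pins are illustrative; match them to the version list above for your platform.

```shell
# Create an isolated environment for the numpy-mcnufft demo
conda create -n rare python=3.6
conda activate rare
conda install numpy=1.17 scipy=1.2.1 matplotlib=3.1.0 tqdm
pip install "tensorflow==1.13.1"

# Optional, only needed for the torch-mcnufft demo:
# conda install pytorch -c pytorch
```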

Run the Demo

To demonstrate the performance of RARE on free-breathing 4D MRI, run RARE by typing

$ python demo_RARE_np.py

or

$ python demo_RARE_torch.py

The per-iteration results will be stored in the ./Results folder. The torch-mcnufft version is a more efficient implementation using a GPU backend. (Thanks to wjgancn for his help with pytorch-mcnufft.)

Visual results of RARE

(Figure: visual examples of RARE reconstructions)

CNN model

The training code for the artifact-to-artifact (A2A) convolutional neural network is coming soon. The pre-trained models are stored in the ./models folder. Feel free to download and test them.

Citation

If you find the paper useful in your research, please cite the paper:

@ARTICLE{Liu.etal2020,
  author={J. {Liu} and Y. {Sun} and C. {Eldeniz} and W. {Gan} and H. {An} and U. S. {Kamilov}},
  journal={IEEE Journal of Selected Topics in Signal Processing}, 
  title={RARE: Image Reconstruction using Deep Priors Learned without Ground Truth}, 
  year={2020},
  volume={14},
  number={6},
  pages={1088-1099},
  publisher={IEEE}
}

