
biomedai-ucsc / inversesr

47 stars · 2 watchers · 7 forks · 38.39 MB

[Early Accepted at MICCAI 2023] PyTorch code for "InverseSR: 3D Brain MRI Super-Resolution Using a Latent Diffusion Model"

Home Page: http://arxiv.org/abs/2308.12465

License: Apache License 2.0

Languages: Python 98.16%, Shell 1.84%
Topics: inverse-problems, latent-diffusion-models, mri-super-resolution, computer-vision, deep-learning, diffusion-models, super-resolution

inversesr's Introduction


We have developed an unsupervised technique for MRI super-resolution. We leverage a recent pre-trained Brain LDM to build a powerful image prior over T1w brain MRIs, and our method can be adapted to different MRI SR settings at test time. It searches for the optimal latent representation $z^*$ in the latent space of the Brain LDM, whose decoding $G(z^*)$ is the super-resolved MRI.

This GIF shows, in image space, the gradual optimization process as InverseSR searches for the optimal latent representation $z^*$.
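Conceptually, the test-time search optimizes $z$ so that the corrupted decoding matches the observed low-resolution input. The sketch below is only an illustration of that idea: `decoder`, `corruption`, and `y_lr` are illustrative names, not the repository's actual API; the real pipelines (InverseSR(ddim) and InverseSR(decoder)) are driven by the job scripts.

```python
# Conceptual sketch of InverseSR-style test-time latent optimization (not the repo's API).
import torch
import torch.nn.functional as F

def invert_latent(decoder, corruption, y_lr, latent_shape, steps=500, lr=1e-2):
    """Search for z* such that corruption(decoder(z*)) matches the observed LR volume y_lr."""
    z = torch.randn(latent_shape, requires_grad=True)   # initial latent code
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        x_hat = decoder(z)                               # candidate HR volume G(z)
        loss = F.mse_loss(corruption(x_hat), y_lr)       # data-consistency loss
        loss.backward()
        optimizer.step()
    return z.detach()                                    # z*; decode once more to obtain the SR MRI G(z*)
```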

Install Requirements

pip install -r requirements.txt

Running InverseSR

We provide an example ground-truth high-resolution MRI at ./inputs/ur_IXI022-Guys-0701-T1.nii.gz; the code for generating the corresponding low-resolution MRI is included. Please download the Brain LDM parameters ddpm and decoder from here into the InverseSR folder. The commands and parameters for running InverseSR can be found in the job_script/InverseSR(ddim).sh and job_script/InverseSR(decoder).sh files.
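As a rough illustration of how a low-resolution input might be derived from the example volume, here is a minimal sketch assuming nibabel and trilinear downsampling; the repository's own corruption code may differ in resolution handling and header bookkeeping.

```python
# Minimal sketch: derive a low-resolution volume from the provided HR example (assumptions noted above).
import nibabel as nib
import torch
import torch.nn.functional as F

hr = nib.load("./inputs/ur_IXI022-Guys-0701-T1.nii.gz")
x = torch.from_numpy(hr.get_fdata()).float()[None, None]     # (1, 1, D, H, W)
x_lr = F.interpolate(x, scale_factor=1 / 4, mode="trilinear",
                     recompute_scale_factor=True, align_corners=False)
# Note: for geometrically correct spacing, the affine would need rescaling by the downsample factor.
nib.save(nib.Nifti1Image(x_lr[0, 0].numpy(), hr.affine), "./inputs/lr_example.nii.gz")
```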

Data Preparation

!! This model must be run on GPUs/CPUs with at least 80 GB of memory.

You can find the necessary files for running the code here.

inversesr's People

Contributors

jueqiw


inversesr's Issues

Missing documentation

Hi, thanks for sharing this interesting work!

I noticed that the README file seems incomplete, e.g., the torchvision version is missing and sections such as "Data Preparation" are empty. Can we expect an update to the documentation?

Best
Jakub

pretrained

In the const.py file:

    import os
    from pathlib import Path

    # Use environment variables to auto-detect whether we are running on a Compute Canada cluster.
    # Thanks to https://github.com/DM-Berger/unet-learn/blob/master/src/train/load.py for this trick.
    COMPUTECANADA = False
    TMP = os.environ.get("SLURM_TMPDIR")

    if TMP:
        COMPUTECANADA = True

    if COMPUTECANADA:
        INPUT_FOLDER = Path(str(TMP)).resolve() / "work" / "inputs"
        MASK_FOLDER = Path(str(TMP)).resolve() / "work" / "inputs" / "masks"
        PRETRAINED_MODEL_FOLDER = Path(str(TMP)).resolve() / "work" / "trained_models"
        PRETRAINED_MODEL_DDPM_PATH = Path(str(TMP)).resolve() / "work" / "trained_models" / "ddpm"
        PRETRAINED_MODEL_VAE_PATH = Path(str(TMP)).resolve() / "work" / "trained_models" / "vae"
        PRETRAINED_MODEL_DECODER_PATH = Path(str(TMP)).resolve() / "work" / "trained_models" / "decoder"
        PRETRAINED_MODEL_VGG_PATH = Path(str(TMP)).resolve() / "work" / "trained_models" / "vgg16.pt"
        OUTPUT_FOLDER = Path(str(TMP)).resolve() / "work" / "outputs"
    else:
        INPUT_FOLDER = Path(__file__).resolve().parent.parent.parent / "data" / "IXI"
        MASK_FOLDER = Path(__file__).resolve().parent.parent / "masks"
        OASIS_FOLDER = Path(__file__).resolve().parent.parent.parent / "data" / "OASIS"
        PRETRAINED_MODEL_FOLDER = Path(__file__).resolve().parent.parent.parent / "data" / "trained_models"

Where are all these pretrained models, and where is the data?

Code Release

Hi, this is interesting work. By the way, when do you plan to release the code?

VGG16 Weights

Hi, I'm wondering if you have the vgg16.pt file, as I cannot find it in the Google Drive; or is it fine to use any VGG16 weights available online? Thank you in advance!
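If standard ImageNet weights turn out to be acceptable, one possible way to produce a vgg16.pt file is sketched below; this assumes torchvision's stock VGG16 checkpoint, which may not match whatever perceptual-loss weights the authors actually used.

```python
# Hedged sketch: export torchvision's stock VGG16 weights as vgg16.pt (may not match the authors' file).
import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
torch.save(model.state_dict(), "trained_models/vgg16.pt")
```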

Question about the autoencoder and training data

Hi, there are no details about the encoder and decoder in the paper or the code. What is the specific architecture of the autoencoder? And what training data were used; do they include the super-resolution ground truth? Looking forward to your reply, thanks!

Hardware requirements

Hi!
I'm facing constant memory issues when running the ddim pipeline on my CPU, so I am wondering what hardware you ran your experiments on. Most importantly, how much RAM did you have?

Thanks for all previous answers
Jakub

Sharing of precomputed latent vectors/stats

Hi!
Would you mind sharing the latent vectors file (latent_vector_ddpm_samples_100000.pt), or just the precomputed stats that serve as the initialization of the latent code for the InverseSR(decoder) pipeline?

Thanks in advance
Jakub
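For context, if such a file simply stored a tensor of sampled DDPM latents, initialization statistics could in principle be derived as below; this is purely a hedged sketch, since the file format, tensor layout, and how the decoder pipeline consumes the stats are all assumptions.

```python
# Hedged sketch: derive initialization stats from sampled latents (file format and usage are assumed).
import torch

samples = torch.load("latent_vector_ddpm_samples_100000.pt")        # assumed shape: (N, C, D, H, W)
latent_mean = samples.mean(dim=0)
latent_std = samples.std(dim=0)
z_init = latent_mean + latent_std * torch.randn_like(latent_std)    # one plausible latent initialization
```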

Pretrained models

@jueqiw What pre-trained VAE and VGG16 models are being used here? There are no such files in the drive. Are the files available somewhere, or did you train them yourself?

About the tested IXI images

Hello, author of InverseSR.
We are trying to use your nice work as a baseline for SR in our paper.
Due to the large resource requirements, we would like to use the same test images and then compare against the PSNR and the other metrics reported in your paper.
I wonder whether the test images are the ones listed below (I saw the list in your GitHub code).
Thank you so much for your nice work.

[attached screenshot of the test image list]

Incomplete code

Dear Jueqi,

Thanks for sharing this work! However, I noticed that some essential files are missing from the code repository. Specifically, the README file, the dataloader file, and the IXI_T1_069.pth file are not included. I would greatly appreciate it if you could kindly upload the complete code for InverseSR, including all the necessary files.

Thank you very much for your attention to this matter. Looking forward to your positive response.

Best regards,
Adele

Mask files

Hi!
Would you mind sharing the mask files used for acquiring 4mm and 8mm image slices?

Thanks in advance
Jakub

I am confused about the corruption_function

In the paper, you introduce a corruption function that generates masks for non-acquired slices, enabling the method to in-paint the missing slices. For instance, on 1 × 1 × 4 mm³ undersampled volumes, masks are created for three out of every four slices on the generated HR 1 × 1 × 1 mm³ volumes.

However, I cannot find this in the code. I did find a downsample function, but it downsamples along all axes.
    import torch.nn.functional as F

    class ForwardDownsample(ForwardAbstract):
        def __init__(self, factor):
            self.factor = factor

        # resolution of input x can be anything, but aspect ratio should be 1:1
        def __call__(self, x):
            x_down = F.interpolate(
                x,
                scale_factor=1 / self.factor,
                mode="trilinear",
                recompute_scale_factor=True,
                align_corners=False,
            )  # BCDHW
            return x_down
