
MT-Net

We provide PyTorch implementations for our paper Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis (IEEE TMI) by Yonghao Li, Tao Zhou, Kelei He, Yi Zhou, and Dinggang Shen.

1. Introduction

We propose MT-Net to take advantage of both paired and unpaired data in MRI synthesis.


Figure 1. An overview of the proposed MT-Net.

Preview:

Our proposed method consists of two main components under two different settings:

  • Edge-MAE (Self-supervised pre-training with image imputation and edge map estimation).

  • MT-Net (Cross-modality MR image synthesis)

Note that our pre-trained Edge-MAE can be utilized for various downstream tasks, such as segmentation or classification.


Figure 2. Examples of imputed images and estimated edge maps from the BraTS2020 dataset.

2. Getting Started

  • Installation

    Install PyTorch and torchvision from http://pytorch.org and other dependencies. You can install all the dependencies by

    pip install -r requirements.txt
  • Dataset Preparation

    Download the BraTS2020 dataset from Kaggle. The file name should be ./data/archive.zip. Unzip the file into ./data/.

  • Data Preprocessing

    After preparing all the data, run ./utils/preprocessing.py to normalize each image to [0,1] and crop a 200×200 region from the center.
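
A minimal NumPy sketch of this preprocessing step (this is an illustration, not the repository's actual preprocessing.py; the 240×240 BraTS slice size is an assumption):

```python
import numpy as np

def normalize_and_crop(slice_2d, crop=200):
    """Min-max normalize a 2-D MR slice to [0, 1], then center-crop it."""
    lo, hi = slice_2d.min(), slice_2d.max()
    if hi > lo:
        norm = (slice_2d - lo) / (hi - lo)
    else:
        norm = np.zeros_like(slice_2d, dtype=float)
    h, w = norm.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    return norm[top:top + crop, left:left + crop]

# Stand-in for a 240x240 BraTS slice with arbitrary intensities
slice_2d = np.arange(240 * 240, dtype=float).reshape(240, 240)
out = normalize_and_crop(slice_2d)
print(out.shape)  # (200, 200)
```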

  • Pre-training

    To pre-train our Edge-MAE, run pretrain.py. You may change the default settings in the ./options/pretrain_options.py. For instance, increase num_workers to speed up fine-tuning. The weights will be saved in ./weight/EdgeMAE/. You can also use the pre-trained checkpoints of Edge-MAE in the ./weight/EdgeMAE/.
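
For intuition, the image-imputation part of Edge-MAE follows MAE-style random patch masking, which can be sketched as below (the 0.75 mask ratio and 16×16 patch size are assumptions for illustration, not values taken from pretrain_options.py):

```python
import numpy as np

def random_patch_mask(img_size=256, patch_size=16, mask_ratio=0.75, rng=None):
    """Pick which patches to mask, MAE-style: shuffle the patch indices
    and hide the first `mask_ratio` fraction of them."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_patches = (img_size // patch_size) ** 2
    n_masked = int(n_patches * mask_ratio)
    perm = rng.permutation(n_patches)
    mask = np.zeros(n_patches, dtype=bool)
    mask[perm[:n_masked]] = True   # True = masked (to be imputed)
    return mask

mask = random_patch_mask()
print(int(mask.sum()), mask.size)  # 192 256
```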

  • Fine-tuning

    To fine-tune our MT-Net, run Finetune.py. You may change the default settings in ./options/finetune_options.py, especially the data_rate option, which adjusts the amount of paired data used for fine-tuning. You can also increase num_workers to speed up fine-tuning. The weights will be saved in ./weight/finetuned/. Note that for MT-Net, the input size must be 256×256.
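
Since the preprocessed slices are 200×200 while MT-Net expects 256×256 inputs, some resizing is needed. A simple nearest-neighbor stand-in is shown below (the repository may well use a different interpolation, e.g. bilinear via torchvision or OpenCV):

```python
import numpy as np

def resize_nearest(img, size=256):
    """Nearest-neighbor resize of a 2-D array to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # map output rows to input rows
    cols = np.arange(size) * w // size   # map output cols to input cols
    return img[rows[:, None], cols]

img = np.arange(200 * 200, dtype=float).reshape(200, 200)
print(resize_nearest(img).shape)  # (256, 256)
```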

  • Test

    When fine-tuning is complete, the weights of Edge-MAE and MT-Net will be saved in ./weight/finetune/. You can change the default settings in ./options/test_options.py. Then run test.py; the synthesized images will be saved in ./snapshot/test/, and the PSNR, SSIM, and NMSE values will be reported.
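
PSNR and NMSE follow standard definitions; a minimal NumPy sketch is below (test.py may compute them slightly differently, and SSIM additionally requires a windowed implementation such as scikit-image's structural_similarity):

```python
import numpy as np

def psnr(ref, syn, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, data_range]."""
    mse = np.mean((ref - syn) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def nmse(ref, syn):
    """Normalized mean squared error relative to the reference energy."""
    return np.sum((ref - syn) ** 2) / np.sum(ref ** 2)

ref = np.ones((4, 4))
syn = ref * 0.9
print(round(psnr(ref, syn), 2), round(nmse(ref, syn), 4))
```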

3. MindSpore

We also provide MindSpore implementations for our paper. MindSpore is a new open-source deep learning training/inference framework.

  • Installation

    Install MindSpore from https://www.mindspore.cn/ along with other dependencies. Unzip ./mindspore/mindcv.zip, an open-source toolbox for computer vision research.
  • Pre-training

    After preparing all the data, run ./mindspore/pretrain.py to pre-train our EdgeMAE. You may change the default settings in ./mindspore/pretrain.py.
  • Fine-tuning

    When pre-training is completed, run ./mindspore/finetune.py. You may change the default settings in ./mindspore/finetune.py.
  • Test

    When fine-tuning is completed, run ./mindspore/test.py. You may change the default settings in ./mindspore/test.py.

4. Citation

@ARTICLE{10158035,
  author={Li, Yonghao and Zhou, Tao and He, Kelei and Zhou, Yi and Shen, Dinggang},
  journal={IEEE Transactions on Medical Imaging}, 
  title={Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis}, 
  year={2023},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TMI.2023.3288001}}


mt-net's People

Contributors: lyhkevin

mt-net's Issues

About pretrain

Exception occurred: ValueError
cannot reshape array of size 0 into shape (5,200,200)
File "E:\MT-Net Data\MT-Net\utils\maeloader.py", line 98, in __getitem__
npy = np.load(self.images[subject])
File "E:\MT-Net Data\MT-Net\pretrain.py", line 42, in <module>
for i,img in enumerate(train_loader):
ValueError: cannot reshape array of size 0 into shape (5,200,200)

Pre-training always runs for a while and then fails with this error; I'm not sure what is going wrong.
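
A "size 0" array usually means an empty or truncated .npy file on disk. One way to track this down is to scan the training directory before launching pre-training; a hypothetical diagnostic (the ./data/train layout is an assumption):

```python
import glob
import os

import numpy as np

def find_bad_npy(data_dir="./data/train"):
    """Flag .npy files that are empty or fail to load -- a likely cause of
    'cannot reshape array of size 0' during pre-training."""
    bad = []
    for path in glob.glob(os.path.join(data_dir, "*.npy")):
        try:
            arr = np.load(path)
            if arr.size == 0:
                bad.append(path)
        except (ValueError, OSError, EOFError):
            bad.append(path)
    return bad
```

Deleting or regenerating the flagged files should let pre-training run through.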

Error when running pretrain.py

Running pretrain.py raises this error:
FileNotFoundError: [Errno 2] No such file or directory: './data/train/'

The train folder cannot be found under the data directory. How can this be resolved?

OpenCodes

I really appreciate your work and hope to learn more details through the code. I hope you can share the code. Thanks.

About Fine-tuning Strategy in `finetune.py`

Hello,

Firstly, great work on the project!

I've been reviewing the finetune.py script and noticed something about the fine-tuning strategy. In the paper, it's mentioned that you "fine-tune the last six layers of the pre-trained encoder while freezing the others." I might be missing something, but in the code, the pretrained encoder (E) is updating all of its layers:

E = MAE_finetune(img_size=opt.img_size, patch_size=opt.mae_patch_size, embed_dim=opt.encoder_dim, depth=opt.depth,
                 num_heads=opt.num_heads, in_chans=1, mlp_ratio=opt.mlp_ratio)

I saw the gradients for FC_module are turned off:

for param in FC_module.parameters():
    param.requires_grad = False

The optimizer is initialized to optimize over parameters of both E and G:

params = list(E.parameters()) + list(G.parameters())
optimizer = torch.optim.Adam(params)

But I couldn't find any section in the code where the initial layers are frozen.

Could you please clarify if this fine-tuning strategy of only updating the last six layers is an essential step? I'm curious if I can reproduce your results without this specific strategy.

Thank you for your time and assistance!
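
For reference, freezing all but the last six encoder blocks could be done by parameter name, assuming timm/MAE-style names such as blocks.3.attn.qkv.weight (the actual parameter names in MT-Net may differ):

```python
import re

def should_freeze(param_name, depth=12, n_trainable=6):
    """Freeze the embedding layers and every transformer block except the
    last `n_trainable`, judging by timm/MAE-style parameter names."""
    m = re.match(r"blocks\.(\d+)\.", param_name)
    if m:
        return int(m.group(1)) < depth - n_trainable
    return True  # patch/positional embeddings etc. stay frozen

# With PyTorch this would be applied roughly as:
# for name, p in E.named_parameters():
#     p.requires_grad = not should_freeze(name, depth=opt.depth)
print(should_freeze("blocks.3.attn.qkv.weight"),
      should_freeze("blocks.7.mlp.fc1.weight"))  # True False
```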

About the execution of the code

Hello,

I've encountered some difficulty understanding how to execute the code even after reviewing the readme file. Could you provide a more concise explanation? Your assistance would be greatly appreciated.

I look forward to your help.

Thank you.

About data preprocessing.

Hi,

I noticed that utils/preprocessing.py splits the BraTS TrainingData into 20% test and 80% training data.
May I ask why the ValidationData provided by BraTS isn't used for testing?
Also, the paper describes using 70% of the training data for training, so why the 80/20 split here?
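
Whatever ratio is chosen, a reproducible subject-level split can be sketched as follows (illustrative only, not the repository's exact splitting logic):

```python
import random

def split_subjects(subjects, test_frac=0.2, seed=0):
    """Deterministically shuffle subject IDs and hold out `test_frac`
    of them for testing; splitting by subject avoids slice-level leakage."""
    subjects = sorted(subjects)
    random.Random(seed).shuffle(subjects)
    n_test = int(len(subjects) * test_frac)
    return subjects[n_test:], subjects[:n_test]  # train, test

train, test = split_subjects([f"sub{i:03d}" for i in range(100)])
print(len(train), len(test))  # 80 20
```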
