mt-net's People

Contributors

lyhkevin

mt-net's Issues

About the execution of the code

Hello,

I've had some difficulty understanding how to run the code, even after reviewing the readme file. Could you provide a clearer explanation of the steps involved? Your assistance would be greatly appreciated.

I look forward to your help.

Thank you.

About data preprocessing.

Hello,

I noticed that utils/preprocessing.py splits the BraTS TrainingData into a training set and a test set with an 80/20 ratio.
Could you explain why the ValidationData provided by BraTS is not used for testing?
Also, the paper states that 70% of the training data is used for training, so why is an 80/20 split used here?
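(For context, this is roughly the kind of subject-level 80/20 split being described; a minimal sketch with an assumed directory layout, not the repository's actual preprocessing code:)

import os
import random

# Hypothetical illustration of an 80/20 split of the BraTS TrainingData subjects.
subjects = sorted(os.listdir('./BraTS/TrainingData'))  # assumed path
random.seed(0)
random.shuffle(subjects)

n_train = int(0.8 * len(subjects))
train_subjects = subjects[:n_train]   # 80% used for training
test_subjects = subjects[n_train:]    # remaining 20% held out for testing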

OpenCodes

I really appreciate your work and hope to learn more details through the code. I hope you can share the code. Thanks.

Error when running pretrain.py

Running pretrain.py raises the following error:
FileNotFoundError: [Errno 2] No such file or directory: './data/train/'

There is no train folder under the data directory. How can I resolve this?
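(A minimal sanity check one could run before launching pretrain.py; the assumption that utils/preprocessing.py is what creates and fills ./data/train/ is inferred from the error, not confirmed by the repository:)

import os
import sys

data_dir = './data/train/'
# Assumed workflow: run the preprocessing script first so that this directory
# exists and contains the preprocessed .npy training files.
if not os.path.isdir(data_dir) or not os.listdir(data_dir):
    sys.exit(data_dir + ' is missing or empty; run utils/preprocessing.py first.')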

About Fine-tuning Strategy in `finetune.py`

Hello,

Firstly, great work on the project!

I've been reviewing the finetune.py script and noticed something about the fine-tuning strategy. The paper mentions that you "fine-tune the last six layers of the pre-trained encoder while freezing the others." I might be missing something, but in the code the pre-trained encoder (E) appears to update all of its layers:

E = MAE_finetune(img_size=opt.img_size, patch_size=opt.mae_patch_size, embed_dim=opt.encoder_dim, depth=opt.depth,
                 num_heads=opt.num_heads, in_chans=1, mlp_ratio=opt.mlp_ratio)

I saw the gradients for FC_module are turned off:

for param in FC_module.parameters():
    param.requires_grad = False

The optimizer is initialized to optimize over parameters of both E and G:

params = list(E.parameters()) + list(G.parameters())
optimizer = torch.optim.Adam(params)

But I couldn't find any section in the code where the initial layers are frozen.
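(For reference, a minimal sketch of what freezing all but the last six encoder layers might look like, building on the E and G objects quoted above. It assumes the encoder exposes its transformer blocks as E.blocks, a common ViT/MAE convention, which is an assumption rather than the repository's confirmed API:)

# Hypothetical sketch: freeze the whole pre-trained encoder, then unfreeze
# only its last six transformer blocks before building the optimizer.
for param in E.parameters():
    param.requires_grad = False

for block in E.blocks[-6:]:            # assumes a ViT-style `blocks` ModuleList
    for param in block.parameters():
        param.requires_grad = True

params = [p for p in E.parameters() if p.requires_grad] + list(G.parameters())
optimizer = torch.optim.Adam(params)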

Could you please clarify if this fine-tuning strategy of only updating the last six layers is an essential step? I'm curious if I can reproduce your results without this specific strategy.

Thank you for your time and assistance!

About pretrain

An exception occurred: ValueError
cannot reshape array of size 0 into shape (5,200,200)
  File "E:\MT-Net Data\MT-Net\utils\maeloader.py", line 98, in __getitem__
    npy = np.load(self.images[subject])
  File "E:\MT-Net Data\MT-Net\pretrain.py", line 42, in <module>
    for i, img in enumerate(train_loader):
ValueError: cannot reshape array of size 0 into shape (5,200,200)

Pretraining always runs for a while and then fails with this error. I'm not sure what is causing it.
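(One way to narrow this down is to scan the preprocessed files for empty or truncated arrays before training; a minimal sketch, assuming the training data is stored as .npy files under ./data/train/:)

import glob
import numpy as np

# Hypothetical check: flag .npy files that are unreadable or do not contain
# the expected number of elements (5 * 200 * 200), which would trigger the
# "cannot reshape array of size 0" error above.
for path in glob.glob('./data/train/*.npy'):
    try:
        arr = np.load(path)
    except Exception as e:
        print(path, 'failed to load:', e)
        continue
    if arr.size != 5 * 200 * 200:
        print(path, 'has unexpected size', arr.size)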
