ycwu1997 / mc-net
Official Code for our MedIA paper "Mutual Consistency Learning for Semi-supervised Medical Image Segmentation" (ESI Highly Cited Paper)
License: MIT License
My own CT dataset has two classes, but the two categories never appear in the same slice; each slice shows only one category. I found that mean Dice and HD95 were reported as 0 most of the time during training, and I wonder whether this is related to my dataset itself. Experiments with the ACDC dataset run fine.
Sorry to raise issues again.
When I try to train the Vnet with all 80 labels, I get this error message:
Traceback (most recent call last):
File "/home/zhaoyan/MC-Net/./code/train_mcnet_3d.py", line 111, in <module>
batch_sampler = TwoStreamBatchSampler(labeled_idxs, unlabeled_idxs, args.batch_size, args.batch_size-labeled_bs)
File "/home/zhaoyan/MC-Net/code/dataloaders/dataset.py", line 327, in __init__
assert len(self.secondary_indices) >= self.secondary_batch_size > 0
AssertionError
The cause is that zero is passed into the __init__ of the TwoStreamBatchSampler class. Am I using the wrong method to train the model?
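The assertion fires because, with all 80 cases labeled, the unlabeled index list is empty, so the unlabeled portion of each batch can no longer be filled. A minimal sketch of the invariant, assuming the sampler splits each batch into labeled and unlabeled parts as the traceback suggests (function name is illustrative, not the repo's code):

```python
# Illustrative sketch of the two-stream batch invariant checked in
# TwoStreamBatchSampler.__init__ (not the repo's actual implementation).
def two_stream_invariant(labeled_idxs, unlabeled_idxs, batch_size, labeled_bs):
    secondary_batch_size = batch_size - labeled_bs  # unlabeled samples per batch
    # When every case is labeled, unlabeled_idxs is empty, len(...) is 0,
    # and the chained comparison below is False -> AssertionError.
    return len(unlabeled_idxs) >= secondary_batch_size > 0

# Semi-supervised split: 8 labeled / 72 unlabeled, batch of 4 (2 + 2) -> holds
assert two_stream_invariant(list(range(8)), list(range(8, 80)), 4, 2)
# Fully supervised: 80 labeled / 0 unlabeled -> invariant violated
assert not two_stream_invariant(list(range(80)), [], 4, 2)
```

For a fully supervised run, the usual workaround is to bypass the two-stream scheme (e.g. use an ordinary batch sampler over all indices) rather than feed it an empty unlabeled set.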
Hello author, I am a first-year graduate student at Zhejiang Sci-Tech University. Currently, I am working on semi-supervised medical image segmentation. While replicating your network, I have encountered difficulty in obtaining the complete CT-82 dataset. I could only access around 80 cases. In order to ensure the rigor of my experiments, I would like to request your assistance in obtaining the complete dataset.
After I comment out the loss_seg, there is a new error message:
Traceback (most recent call last):
  File "/home/zhaoyan/MC-Net/./code/train_mcnet_3d.py", line 147, in <module>
    loss_seg_dice += dice_loss(y_prob[:,1,...], label_batch[:labeled_bs,...] == 1)
  File "/home/zhaoyan/MC-Net/code/utils/losses.py", line 5, in Binary_dice_loss
    intersection = 2 * torch.sum(predictive * target) + ep
RuntimeError: The size of tensor a (2) must match the size of tensor b (112) at non-singleton dimension 1
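The mismatch (2 vs 112 at dimension 1) indicates that one operand still carries a class/channel axis of size 2 while the other is a plain spatial mask, so the elementwise product cannot broadcast. A hedged NumPy sketch of a soft binary Dice loss that makes the shape requirement explicit (illustrative, not the repo's losses.py):

```python
import numpy as np

# Illustrative soft binary Dice loss (NumPy sketch, not the repo's code).
def binary_dice_loss(predictive, target, ep=1e-8):
    # Both inputs must have identical spatial shapes, e.g. (B, D, H, W).
    # The RuntimeError above arises when a class axis (size 2) is left on
    # one tensor but not the other before the elementwise product.
    assert predictive.shape == target.shape, "shapes must match elementwise"
    intersection = 2 * np.sum(predictive * target) + ep
    union = np.sum(predictive) + np.sum(target) + ep
    return 1 - intersection / union

pred = np.ones((1, 4, 4, 4))
mask = np.ones((1, 4, 4, 4))
assert binary_dice_loss(pred, mask) < 1e-6  # perfect overlap -> loss near 0
```

Printing `y_prob.shape` and `label_batch.shape` just before the call usually reveals which tensor still has the extra axis.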
Thank you for your work and for sharing the code, but I have a question about the data preprocessing.
The paper says that "we cropped the 3D samples according to the ground truth, with enlarged margins i.e. [10 ∼ 20, 10 ∼ 20, 5 ∼ 10] or [25, 25, 25] voxels on LA or PancreasCT, respectively", but your preprocessing code crops the samples only on the x and y axes.
Why are they different? Which method was used to produce the performance reported in the paper, i.e. 91.07 on LA and 79.37 on PancreasCT?
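The cropping step described in the paper can be sketched as a bounding-box crop around the ground-truth foreground with per-axis margins. A minimal NumPy sketch under that assumption (function name and margin handling are illustrative, not the repo's preprocessing code):

```python
import numpy as np

# Illustrative 3D crop around the label's bounding box with per-axis margins,
# in the spirit of the paper's description (not the repo's exact preprocessing).
def crop_with_margin(image, label, margins=(25, 25, 25)):
    coords = np.argwhere(label > 0)            # voxels of the foreground object
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1                # exclusive upper bound
    lo = np.maximum(lo - margins, 0)           # enlarge box, clamp to volume
    hi = np.minimum(hi + margins, image.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[sl], label[sl]

vol = np.zeros((100, 100, 60))
lab = np.zeros_like(vol)
lab[40:60, 40:60, 20:40] = 1
img_c, lab_c = crop_with_margin(vol, lab)
assert img_c.shape == (70, 70, 60)  # 20 + 2*25 on x/y; z clipped at the volume edge
```

Cropping only on x/y, as the repo's code does, is the special case where the z-margin extends to the full slab.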
When reproducing the MC-Net experiments on the Pancreas dataset, I used your data preprocessing file. Since I downloaded the data in .nii.gz format, I only used the latter part of the code in the Pre-processing.ipynb notebook. After modifying the code for preprocessing, I found that the average Dice score fluctuated around 0.1 to 0.2 during training. Have you encountered such a situation, or would it be possible for you to provide the preprocessed Pancreas dataset? My email is [email protected].
Hello, I am very interested in your MC-Net work. Could you please provide the Pancreas dataset?
Hi. I read your paper and code. The paper mentions that for the ACDC dataset you solve a 3-class segmentation problem, but in the code I see num_classes=4. I tried to understand why it is 4 and not 3 but couldn't. Can you please elaborate? Thank you.
Hi, I want to test the mcnet2d model on my in-house dataset, and I have one question regarding the use of unlabeled data. The code ran successfully with the ACDC data because all images had labels, but with my in-house dataset it throws an error because my unlabeled data has no labels. Would it be OK to use random labels for the unlabeled data? Thanks.
Thanks for sharing the code. I have a question about the parameter counts of MC-Net+. The backbone of MC-Net+ is the Vnet, and MC-Net+ has one encoder and three decoders (reading your code, I found these decoders do not share weights). In Tab. 2, you state that the parameter count of both the Vnet and MC-Net+ is 9.44, while that of the Multi-scale MC-Net+ is 5.88. Why do the Vnet and MC-Net+ have the same parameter count, and why does the Multi-scale MC-Net+ have fewer parameters? Could you explain this? Thank you.
I get this error message when I use this command to train the model:
python ./code/train_mcnet_3d.py --dataset_name LA --model vnet --labelnum 8 --gpu 1,2 --temperature 0.1
Traceback (most recent call last):
  File "/home/zhaoyan/MC-Net/./code/train_mcnet_3d.py", line 146, in <module>
    loss_seg += F.cross_entropy(y[:labeled_bs], label_batch[:labeled_bs])
  File "/home/zhaoyan/anaconda3/envs/VNet/lib/python3.9/site-packages/torch/nn/functional.py", line 3026, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: Expected floating point type for target with class probabilities, got Long
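Assuming standard PyTorch semantics, this error means the target tensor has the same number of dimensions as the logits, so `F.cross_entropy` interprets it as class probabilities (which must be floating point) rather than as integer class indices (which must have one fewer dimension). A rough sketch of that rule, with the decision logic re-implemented in plain Python for illustration (names hypothetical):

```python
# Illustrative sketch of how F.cross_entropy chooses its target mode,
# based on the target's rank relative to the logits (not PyTorch source).
def target_mode(logits_shape, target_shape, target_is_integer):
    if len(target_shape) == len(logits_shape) - 1:
        return "class-indices"       # valid with an integer (Long) target
    if len(target_shape) == len(logits_shape):
        # Same rank -> treated as class probabilities; must be float.
        return "probabilities" if not target_is_integer else "error: Long probabilities"
    return "error: shape mismatch"

# Logits (B, C, D, H, W) with Long indices (B, D, H, W): the valid case.
assert target_mode((2, 2, 112, 112, 80), (2, 112, 112, 80), True) == "class-indices"
# A Long target that kept the channel axis reproduces the RuntimeError above.
assert target_mode((2, 2, 112, 112, 80), (2, 2, 112, 112, 80), True).startswith("error")
```

In practice the fix is usually to squeeze the channel axis out of `label_batch` (or index it away) so the target becomes a Long tensor of shape (B, D, H, W).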
Thanks for your great work.
The code may have a small mistake in the 'train_mcnet_3d.py' file, line 163: 'loss_seg' (line 146) is not counted in the backward function. Or is it simply not meant to take part in it? Am I missing something here?
As I'm new to this field, I can't prepare the data for the code to run. Could you offer a more detailed guideline for it? Many thanks.
Hello,
Thanks for sharing your code.
Recently I have been reading your code.
If I did not misunderstand it, then for 2D the length of the dataloader is 150 and max_iteration is 30000, so the number of epochs is 200.
That means consistency_weight can only reach 0.1 in the final epoch (200). During training it is less than 0.1, and even less than 0.05 for the first half of the epochs.
So does it really influence the result? The total loss is mostly, or even entirely, driven by the dice loss.
I wonder whether you can already reach the best_performance before it takes effect!
Thanks for your reply
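The behavior described above matches the Gaussian/sigmoid ramp-up commonly used for the consistency weight in semi-supervised training, where the weight only reaches its maximum at the last ramp-up iteration. A hedged sketch under that assumption (the constants 0.1 and 30000 follow the discussion above; the repo's exact schedule may differ):

```python
import math

# Illustrative sigmoid-style ramp-up for the consistency weight, as commonly
# used in mean-teacher-style training (constants are assumptions, not the repo's).
def consistency_weight(iteration, max_weight=0.1, rampup_iters=30000):
    t = min(iteration, rampup_iters) / rampup_iters          # progress in [0, 1]
    return max_weight * math.exp(-5.0 * (1.0 - t) ** 2)      # Gaussian ramp-up

assert abs(consistency_weight(30000) - 0.1) < 1e-9   # full weight only at the end
assert consistency_weight(15000) < 0.05              # below half for most of training
assert consistency_weight(0) < 0.001                 # nearly zero at the start
```

Under this schedule the dice term does dominate early training by design; the ramp-up is usually motivated by the unreliability of the consistency signal in the first epochs.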
I have two questions for you. Question 1: in the pancreas segmentation experiments you propose the Multi-scale MC-Net+ method; could you share more details of this algorithm (e.g., the number of deeply supervised layers, and whether you used the loss function for unlabeled data from Luo's algorithm)? Question 2: I tried the mutual consistency algorithm from your paper and found it works very well; could you recommend some functions similar in purpose to sharpening (or related references)? Looking forward to your reply, many thanks!