
ua-mt's Issues

Does uncertainty help?

Hi @yulequan,

I ran an ablation study on the uncertainty mechanism.


Specifically, if we do not use the uncertainty to select the most certain targets, and instead use all voxels in each iteration, will the performance degrade?

To disable the uncertainty-based selection, I simply increased the threshold to 100, so that all voxels are used to guide the student's learning.

threshold = (0.75+0.25*ramps.sigmoid_rampup(iter_num, max_iterations))*np.log(2)
mask = (uncertainty<threshold).float()
consistency_dist = torch.sum(mask*consistency_dist)/(2*torch.sum(mask)+1e-16)

# my modification
threshold = 100 #(0.75+0.25*ramps.sigmoid_rampup(iter_num, max_iterations))*np.log(2)

However, the results are odd: the performance does not degrade without uncertainty (there are even slight improvements).


I also ran a paired t-test, but found no significant difference (p > 0.05) between training with and without uncertainty.


Could you help me figure out what's wrong with my experiments?

The following are all my experiment results (code, trained model, logs...).
Download link: https://pan.baidu.com/s/1tM6fc_hz3_LE23cLffnFBg
Password: 5p1k

Regarding the source code, I only changed the default seed to 12345.

Best regards,
Jun

Question about the preprocessing of the LA dataset.

@yulequan
I have noticed that in the preprocessing of the LA dataset, a patch of size [112, 112, 80] is cropped around the label center, extended by 20-40 voxels along the x and y axes and by 10-20 voxels along the z axis.
My question is that, in my opinion, the position of the left atrium should not be known during preprocessing if the sample is used as unlabeled data. I wonder whether you have considered using the whole scans as unlabeled data.
I cannot download the LA dataset, but I have tried your method on another dataset. It seems that UA-MT only works in the label-centered setting; otherwise, the performance is worse than vnet_dp trained on the whole scans (also with random crop).
By the way, could you tell me the original spacing and scan size of the LA dataset, and whether they were changed in your experiments?

The proposed method applied to CT images

Hi, thanks for sharing your work.
Have you investigated CT image segmentation with the proposed method? I tried to run your code on a CT dataset, but the performance is not very good.
Many thanks,

Guidance about the pipeline

Thank you for sharing your code. I have learned a lot from this project. I have some queries; please answer them when you're available.

  1. You build the H5 file so that it contains both "image" and "label", regardless of whether the sample is labeled or unlabeled, even though the label is never used for the unlabeled data. So what if I have only the "image" for a sample and no "label" for it? The data generator function will raise an error; can you tell me how to solve this? (One workaround I'm considering is sketched after this list.)
  2. I am trying this on some other data taken from a hospital. Based on your experience, could you please advise what ratio of labeled to unlabeled data gives optimal performance? And can we get a Dice score higher than fully supervised learning with more unlabeled data, or will it decrease the performance?
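
One workaround I'm considering for question 1 (a minimal sketch of my own, assuming h5py and the repository's "image"/"label" dataset names; save_unlabeled_case is a hypothetical helper): store an all-zero dummy label for unlabeled cases, so code that always reads both keys still works. The label values are never touched in the unsupervised branch, so their content doesn't matter.

import h5py
import numpy as np

def save_unlabeled_case(image, out_path):
    # Write the image plus a dummy all-zero "label" so the loader
    # can read both keys for every case, labeled or not.
    with h5py.File(out_path, 'w') as f:
        f.create_dataset('image', data=image.astype(np.float32))
        f.create_dataset('label', data=np.zeros(image.shape, dtype=np.uint8))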

Pre-trained model

Thank you for sharing your code. I have learned a lot from this project. I have some queries; please answer them when you're available.

Could you share your best pre-trained model? My experiments cannot reach the accuracy reported in your paper.

Implementation of Monte Carlo Dropout

Hello, @yulequan

I read your code carefully; uncertainty estimation is a new idea for me.
I see your implementation of Monte Carlo Dropout below:

T = 8
volume_batch_r = unlabeled_volume_batch.repeat(2, 1, 1, 1, 1)
stride = volume_batch_r.shape[0] // 2
preds = torch.zeros([stride * T, 2, 112, 112, 80]).cuda()
for i in range(T//2):
    ema_inputs = volume_batch_r + torch.clamp(torch.randn_like(volume_batch_r) * 0.1, -0.2, 0.2)
    with torch.no_grad():
        preds[2 * stride * i:2 * stride * (i + 1)] = ema_model(ema_inputs)
preds = F.softmax(preds, dim=1)
preds = preds.reshape(T, stride, 2, 112, 112, 80)
preds = torch.mean(preds, dim=0)  # (batch, 2, 112, 112, 80)
uncertainty = -1.0*torch.sum(preds*torch.log(preds + 1e-6), dim=1, keepdim=True)

I wonder: is this the most common way to implement uncertainty estimation in the mean-teacher method?
It seems your implementation adds perturbation to the inputs rather than applying dropout to the network, although both ways add regularization.
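
For comparison, here is a minimal sketch of network-level MC Dropout as I understand it (my own code, not from the repository; enable_mc_dropout and mc_dropout_uncertainty are hypothetical helpers, and it assumes the model contains nn.Dropout layers and outputs logits of shape (batch, C, D, H, W)):

import torch
import torch.nn as nn
import torch.nn.functional as F

def enable_mc_dropout(model):
    # Keep only the dropout layers in train mode so each forward
    # pass samples a different sub-network.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

def mc_dropout_uncertainty(model, volume_batch, T=8):
    model.eval()
    enable_mc_dropout(model)
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(volume_batch), dim=1) for _ in range(T)])
    mean_probs = probs.mean(dim=0)  # (batch, C, D, H, W)
    # Predictive entropy of the averaged softmax output.
    uncertainty = -torch.sum(mean_probs * torch.log(mean_probs + 1e-6), dim=1, keepdim=True)
    return mean_probs, uncertainty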

A small code detail problem

Hi, @yulequan
Thank you for open-sourcing the code. I went through it carefully and have a small question.
consistency_dist = torch.sum(mask*consistency_dist)/(2*torch.sum(mask)+1e-16)
consistency_loss = consistency_weight * consistency_dist
loss = supervised_loss + consistency_loss
You can see that, when calculating consistency_dist, the sum of the mask is multiplied by 2. I'm curious: why multiply by 2 here?
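
My current guess (please correct me if I'm wrong): consistency_dist holds one squared-error entry per class channel, i.e. shape (batch, 2, 112, 112, 80), while mask has shape (batch, 1, 112, 112, 80) and broadcasts over both channels, so 2*torch.sum(mask) is simply the number of summed entries. A tiny shape check of that reading:

import torch
consistency_dist = torch.randn(2, 2, 112, 112, 80) ** 2  # (batch, num_classes, ...) squared error
mask = torch.ones(2, 1, 112, 112, 80)                    # (batch, 1, ...) broadcasts over the class dim
print((mask * consistency_dist).numel() // mask.numel()) # -> 2: two summed entries per mask voxel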

Looking forward to your reply.
Best,
Jianqiang Ma

Visualization tool in your paper

Hi,

Thanks so much for your work and the nice qualitative results.
I'm new to 3D semi-supervised segmentation, and I'm wondering which visualization tool you used for the figures in the paper?

Cheers,

About noisy labels

Dear author,
Isn't this work about semi-supervised medical image segmentation? Why does the training set include noisy labels, and how are they handled?

Predictive Entropy and Uncertainty

Thank you for the great code~ I have a theoretical question.
What is the significance of averaging the prediction results over T forward passes?

This is a classification problem, where uncertainty can be computed via predictive entropy without T forward passes.
Since the predictive entropy itself can already represent uncertainty, I want to confirm the significance of averaging over T forward passes.
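
To make my question concrete, here is a small numeric illustration (my own, not from the repository) of where the two quantities differ: each individual stochastic pass can be confident (low entropy) while the passes disagree with each other, and only the entropy of the mean prediction exposes that disagreement.

import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-6))

p1 = np.array([0.9, 0.1])      # pass 1: confident in class 0
p2 = np.array([0.1, 0.9])      # pass 2: confident in class 1

print(entropy(p1))             # ~0.33 nats: looks certain
print(entropy(p2))             # ~0.33 nats: looks certain
print(entropy((p1 + p2) / 2))  # ~0.69 nats (= log 2): maximally uncertain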

Why is the student model stored instead of the teacher model?

Dear @yulequan
Another question: why is the student model stored instead of the teacher model in the training phase? In the "Mean Teacher" paper, I found that the teacher model has better performance.
if iter_num % 1000 == 0:
    save_mode_path = os.path.join(snapshot_path, 'iter_' + str(iter_num) + '.pth')
    torch.save(model.state_dict(), save_mode_path)
    logging.info("save model to {}".format(save_mode_path))
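
For reference, this is how I imagine the teacher could be saved as well (my own sketch, mirroring the snippet above; ema_save_path is a name I made up):

if iter_num % 1000 == 0:
    ema_save_path = os.path.join(snapshot_path, 'ema_iter_' + str(iter_num) + '.pth')
    torch.save(ema_model.state_dict(), ema_save_path)  # teacher (EMA) weights
    logging.info("save teacher model to {}".format(ema_save_path))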
Thanks,
Jianqiang Ma

Question on dynamic results

Dear @yulequan,

Thanks for sharing the great code. It's very clear and works out of the box.

My friend and I ran the code (without any modification) and got the following results.
The results vary somewhat: some metrics can be reproduced, some metrics (red) are even better than the results reported in the paper, but some metrics (blue) are degraded.

Could you share your insights on these varied results? And what could be the possible reason for the degraded ones?

We also tried re-running the code on a local server; however, the results are similar.


A minor bug

Here, the case folder name is missing, so all the saved results have the same name and get overwritten during saving.

UA-MT/code/test_util.py

Lines 28 to 31 in da31df5

if save_result:
nib.save(nib.Nifti1Image(prediction.astype(np.float32), np.eye(4)), test_save_path + id + "_pred.nii.gz")
nib.save(nib.Nifti1Image(image[:].astype(np.float32), np.eye(4)), test_save_path + id + "_img.nii.gz")
nib.save(nib.Nifti1Image(label[:].astype(np.float32), np.eye(4)), test_save_path + id + "_gt.nii.gz")
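
A possible fix (my own sketch; image_path and case_name are hypothetical, standing in for whatever per-case identifier the test loop actually has): include the case's folder name in the output filename so cases don't overwrite each other.

import os
case_name = os.path.basename(os.path.dirname(image_path))  # hypothetical per-case id
if save_result:
    nib.save(nib.Nifti1Image(prediction.astype(np.float32), np.eye(4)), test_save_path + case_name + "_pred.nii.gz")
    nib.save(nib.Nifti1Image(image[:].astype(np.float32), np.eye(4)), test_save_path + case_name + "_img.nii.gz")
    nib.save(nib.Nifti1Image(label[:].astype(np.float32), np.eye(4)), test_save_path + case_name + "_gt.nii.gz")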

Finally, I really appreciate that you made the code publicly available. It is well written and makes great learning material for me.

Looking forward to your reply.
Best,
Jun

Lack of Uncertainty normalization

Hi, thanks for your great work!

I think there are some problems in the uncertainty computation code. As stated in your paper, the uncertainty threshold should ramp up from 0.75 to 1, right? But in my experiment, the uncertainty value easily reaches 3.

I think we should normalize the uncertainty to [0, 1] after the line uncertainty = -1.0 * torch.sum(preds * torch.log(preds + 1e-6), dim=1, keepdim=True).

Would you consider fixing this? Thank you!
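
One possible normalization (my own sketch, not from the repository): predictive entropy with the natural log is bounded by log(C) for C classes, so dividing by log(C) maps it into [0, 1] and makes a 0.75 -> 1.0 threshold ramp directly interpretable.

import numpy as np
import torch

num_classes = preds.shape[1]  # class dimension of the softmax output
uncertainty = -1.0 * torch.sum(preds * torch.log(preds + 1e-6), dim=1, keepdim=True)
uncertainty = uncertainty / np.log(num_classes)  # entropy <= log(C), so this is in [0, 1]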

Some questions about data preprocessing.

This project is a great framework, but I have some questions.

  1. I notice that in the preprocessing phase, you crop each image to a box (larger than the patch size (112, 112, 80)) according to the label mask to obtain an LA region. But later, in the LAHeart dataset class, you add a random-crop transform that crops the image to (112, 112, 80) again. Why? I think cropping directly to (112, 112, 80) and removing the random crop from the transforms would be enough; the random crop may lose some LA information.
  2. In the testing phase, why do you run inference several times on patches of one image? If images were cropped to (112, 112, 80) in preprocessing, testing would be simpler, with no need for patching. Besides, I think this patch-and-average method used in testing may act like ensemble learning, which would improve performance. (A sketch of what I mean is below.)
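
For clarity, here is a minimal sketch of the patch-and-average style of testing I'm referring to (my own code with made-up names, not the repository's exact implementation): slide a (112, 112, 80) window over the volume with overlap, accumulate per-class scores, and divide by the per-voxel visit count.

import numpy as np

def _starts(dim, patch, stride):
    # All window start positions along one axis, including a final
    # position flush with the far edge (assumes dim >= patch).
    s = list(range(0, dim - patch + 1, stride))
    if s[-1] != dim - patch:
        s.append(dim - patch)
    return s

def sliding_window_predict(predict_fn, image, patch=(112, 112, 80), stride=(56, 56, 40), num_classes=2):
    # predict_fn maps a (112, 112, 80) crop to per-class softmax scores
    # of shape (num_classes, 112, 112, 80); overlaps are averaged.
    D, H, W = image.shape
    score = np.zeros((num_classes, D, H, W), dtype=np.float32)
    count = np.zeros((D, H, W), dtype=np.float32)
    for z in _starts(D, patch[0], stride[0]):
        for y in _starts(H, patch[1], stride[1]):
            for x in _starts(W, patch[2], stride[2]):
                crop = image[z:z+patch[0], y:y+patch[1], x:x+patch[2]]
                score[:, z:z+patch[0], y:y+patch[1], x:x+patch[2]] += predict_fn(crop)
                count[z:z+patch[0], y:y+patch[1], x:x+patch[2]] += 1
    return score / count[None]  # count >= 1 everywhere by construction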
