
Comments (16)

tiangexiang commented on June 14, 2024

Thank you for your interest in our work! Unfortunately, we don't plan to release the trained model: we have refactored the code quite a lot, and the trained model cannot be loaded directly in the current repo due to inconsistent names/structures. Note that our method is dataset-specific, so a model trained on one dataset cannot be used to denoise other datasets.
However, we do provide the denoised data for all four datasets presented in the paper: https://figshare.com/s/6275f40c32f67e3b6083
Hope this helps :)

from ddm2.

tiangexiang commented on June 14, 2024

Oh, now I get what you mean! Yes, we do require multiple 2D observations of the same underlying 2D slice for unsupervised learning. The difference between Noise2Self and DDM2 is the definition and scope of a data point: in Noise2Self, a data point usually refers to a single pixel, while in DDM2 a data point is an entire 2D slice. This way, Noise2Self can denoise the 2D noisy image on its own (since it contains many pixels), and of course masking is required to make this strategy effective. DDM2, on the other hand, requires multiple 2D slices as inputs, and no masking is needed. Hope this clarifies :)
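The distinction can be sketched in a few lines of NumPy. This is illustrative only (the shapes and masking ratio are arbitrary, and neither snippet is the actual Noise2Self or DDM2 implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noise2Self-style: data points are pixels. A random mask holds out a
# few pixels of the single noisy image; the denoiser is trained to
# predict the held-out pixels from the unmasked ones.
noisy = rng.normal(size=(64, 64))
mask = rng.random(noisy.shape) < 0.05        # ~5% of pixels held out
masked_input = np.where(mask, 0.0, noisy)    # network input
targets = noisy[mask]                        # self-supervision targets

# DDM2-style: data points are whole 2D slices, so no pixel mask is
# needed. Other noisy observations of the same slice serve as inputs.
observations = rng.normal(size=(64, 64, 4))  # T = 4 observations of one slice
target_slice = observations[..., 0]
condition = observations[..., 1:]            # the remaining T-1 observations
```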


tiangexiang commented on June 14, 2024

@gzliyu @VGANGV Sorry, I just saw these messages! One potential cause is model loading (either in Stage 3 training or inference). Did you specify the correct Stage 3 model checkpoint before running inference? Could you please provide some validation results from the training process (for both Stage 1 and Stage 3)?


chenyzzz commented on June 14, 2024

Thank you for your interest in our work! Unfortunately, we don't plan to release the trained model: we have refactored the code quite a lot, and the trained model cannot be loaded directly in the current repo due to inconsistent names/structures. Note that our method is dataset-specific, so a model trained on one dataset cannot be used to denoise other datasets. However, we do provide the denoised data for all four datasets presented in the paper: https://figshare.com/s/6275f40c32f67e3b6083 Hope this helps :)

Thanks for your answer, it was very helpful! I have a few more questions. The paper notes that DDM2 currently applies only to certain datasets (those four brain datasets?). Can I use it on my cardiac image dataset? And if my coding skills are weak, would adapting the code be infeasible? I'm sorry I have so many questions. Thanks again for your reply!


tiangexiang commented on June 14, 2024

Hi, yes, it is absolutely fine to use DDM2 on different datasets. However, you have to make sure the dataset you are using is still a 4D volume [H x W x D x T], where T indicates the number of different observations of the same 3D volume. Then you should be able to train DDM2 on a new dataset seamlessly.
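A quick sanity check on the expected shape might look like the following. This helper is illustrative, not part of the DDM2 codebase, and the cardiac example dimensions are made up:

```python
import numpy as np

def check_ddm2_input(vol: np.ndarray) -> tuple:
    """Verify that a dataset is a 4D volume [H x W x D x T].

    T counts repeated noisy observations of the same 3D volume;
    DDM2 needs more than one such observation.
    """
    if vol.ndim != 4:
        raise ValueError(f"expected 4D [H x W x D x T], got shape {vol.shape}")
    h, w, d, t = vol.shape
    if t < 2:
        raise ValueError("DDM2 needs multiple observations per volume (T > 1)")
    return h, w, d, t

# e.g. a hypothetical cardiac acquisition: 128x128 in-plane, 30 slices,
# 8 repeated observations of the same volume
print(check_ddm2_input(np.zeros((128, 128, 30, 8))))  # (128, 128, 30, 8)
```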


mariusarvinte commented on June 14, 2024

Hi, yes, it is absolutely fine to use DDM2 on different datasets. However, you have to make sure the dataset you are using is still a 4D volume [H x W x D x T], where T indicates the number of different observations of the same 3D volume. Then you should be able to train DDM2 on a new dataset seamlessly.

Could you please comment on how one can reproduce the experiments with n=1 in the Appendix of the paper (T=1 in your reply) with this codebase? What should the X and condition signals returned by the dataloader be in this case?


tiangexiang commented on June 14, 2024

Hi, are you referring to Figure 11, the results on synthetic noise with n=1? If so, that experiment uses only 1 prior slice as input (in the main paper, we usually used 3 prior slices). This does not necessarily require T to be 1 as well. In fact, I don't think any unsupervised algorithm right now can handle T = 1.


mariusarvinte commented on June 14, 2024

Hi, are you referring to Figure 11, the results on synthetic noise with n=1? If so, that experiment uses only 1 prior slice as input (in the main paper, we usually used 3 prior slices). This does not necessarily require T to be 1 as well. In fact, I don't think any unsupervised algorithm right now can handle T = 1.

Thanks for the quick reply, sorry for being a bit vague at first.

Yes, I was talking about the result in Figure 11, and it seems I was mistaking n=1 for T=1. But if I understand correctly, you just applied the method to 2D data instead of 3D data, and it still requires multiple noisy observations of the same 2D clean sample?

My general understanding is that, for example, Noise2Self is designed to work with T=1 (a single noisy observation of each datapoint). Citing from the Introduction in Noise2Self (https://arxiv.org/pdf/1901.11365.pdf, Page 1):

In this paper, we propose a framework for blind denoising based on self-supervision. [...] 
This allows us to learn denoising functions from single noisy measurements of each object, with performance close to that of supervised methods.

I was wondering if your method/code would allow one to do the same.


chenyzzz commented on June 14, 2024

@tiangexiang Hello! I trained on the Stanford HARDI dataset following the steps. The denoised images generated during Stage 3 training looked good, but the result I got from denoising.py was very strange. I don't know why. Did I do something wrong?
Also, I don't quite understand this passage (the fourth point of the training configuration requirements): "After Stage II finished, the state file (recorded in the previous step) needs to be specified at 'initial_stage_file' for both 'train' and 'val' in the 'datasets' section." Could you explain it again?
I am so sorry for my many questions. Thank you again! Best wishes!


tiangexiang commented on June 14, 2024

Hi! Sorry for the confusion. After Stage II is finished, the generated '.txt' file should be specified as the 'stage2_file' variable in the config file, which is the last variable in the file. It should not be specified at 'initial_stage_file' for both 'train' and 'val' in the 'datasets' section. That is an outdated statement, and we will update it accordingly.

Note that 'stage2_file' is needed for both Stage III training and denoising. And please make sure the trained model is loaded properly when denoising!
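For reference, the config edit described above could be scripted like this. It is a minimal sketch assuming the config is a JSON file; only the 'stage2_file', 'initial_stage_file', and 'datasets' names come from the reply, and all paths in the usage example are hypothetical:

```python
import json

def set_stage2_file(config_path: str, stage2_txt: str) -> None:
    """Point 'stage2_file' in the config at the Stage II '.txt' output.

    The file goes under 'stage2_file' (the last variable in the config),
    NOT under 'initial_stage_file' in the 'datasets' section.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["stage2_file"] = stage2_txt
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)

# hypothetical usage:
# set_stage2_file("config/my_experiment.json", "experiments/stage2_output.txt")
```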


gzliyu commented on June 14, 2024

Hi! Sorry for the confusion. After Stage II is finished, the generated '.txt' file should be specified as the 'stage2_file' variable in the config file, which is the last variable in the file. It should not be specified at 'initial_stage_file' for both 'train' and 'val' in the 'datasets' section. That is an outdated statement, and we will update it accordingly.

Note that 'stage2_file' is needed for both Stage III training and denoising. And please make sure the trained model is loaded properly when denoising!

A kind note to update "After Stage II finished, the state file (recorded in the previous step) needs to be specified at 'initial_stage_file'"


gzliyu commented on June 14, 2024

Hi! Did you solve this problem? My inference on hardi150 looks weird like this:
[screenshot: 截屏2023-06-23 11 16 28]
@tiangexiang @chenyzzz


VGANGV commented on June 14, 2024

Hi! Did you solve this problem? My inference on hardi150 looks weird like this [screenshot] @tiangexiang @chenyzzz

@gzliyu I ran into the same problem; my denoising result is also very strange. Have you solved it?


VGANGV commented on June 14, 2024

@tiangexiang Thank you Tiange! I realized that I forgot to update the config file before denoising. After changing the 'resume_state' of 'noise_model' in the config file to the Stage 3 model, I got a normal denoising result.
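That fix can be scripted the same way as the Stage II one. A hedged sketch, assuming the config is JSON with a 'resume_state' entry under 'noise_model' (those two field names come from the comment above; everything else, including the checkpoint path, is hypothetical):

```python
import json

def use_stage3_checkpoint(config_path: str, stage3_ckpt: str) -> None:
    """Set 'resume_state' under 'noise_model' to the Stage 3 checkpoint
    so the trained model is actually loaded when denoising."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg.setdefault("noise_model", {})["resume_state"] = stage3_ckpt
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)

# hypothetical usage:
# use_stage3_checkpoint("config/my_experiment.json", "experiments/stage3_ckpt")
```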

