ctpelvic1k's Issues

Dataset 2 untar error

When I tried to untar dataset 2, the following error occurred. Has anyone else run into this issue?

$ tar -xzvf CTPelvic1K_dataset2_data.tar.gz.0 CTPelvic1K_dataset2_data.tar.gz.1 CTPelvic1K_dataset2_data.tar.gz.2 CTPelvic1K_dataset2_data.tar.gz.3 CTPelvic1K_dataset2_data.tar.gz.4 CTPelvic1K_dataset2_data.tar.gz.5 CTPelvic1K_dataset2_data.tar.gz.6 CTPelvic1K_dataset2_data.tar.gz.7

tar: Truncated input file (needed 109287936 bytes, only 0 available)
tar: CTPelvic1K_dataset2_data.tar.gz.1: Not found in archive
tar: CTPelvic1K_dataset2_data.tar.gz.2: Not found in archive
tar: CTPelvic1K_dataset2_data.tar.gz.3: Not found in archive
tar: CTPelvic1K_dataset2_data.tar.gz.4: Not found in archive
tar: CTPelvic1K_dataset2_data.tar.gz.5: Not found in archive
tar: CTPelvic1K_dataset2_data.tar.gz.6: Not found in archive
tar: CTPelvic1K_dataset2_data.tar.gz.7: Not found in archive
tar: Error exit delayed from previous errors.
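The `.tar.gz.0` through `.tar.gz.7` suffixes suggest one archive split into sequential chunks (e.g. with `split`), in which case the command above fails because tar treats the later parts as member names to extract from the first chunk. The fix is to feed tar the concatenated stream instead. A self-contained sketch with stand-in file names:

```shell
# Demo of the fix: split archive parts must be concatenated back into one
# stream before tar can read them. The names below are stand-ins for the
# CTPelvic1K_dataset2_data.tar.gz.N parts.
mkdir -p demo && echo hello > demo/file.txt
tar -czf whole.tar.gz demo
split -b 64 -d whole.tar.gz whole.tar.gz.   # -> whole.tar.gz.00, .01, ...
rm whole.tar.gz demo/file.txt
cat whole.tar.gz.* | tar -xzvf -            # reassemble, then extract
```

Applied to the dataset this becomes `cat CTPelvic1K_dataset2_data.tar.gz.* | tar -xzvf -` (assuming the parts really are plain sequential chunks).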

Train/val/test split and provided model weights

Hi,

Thank you for providing such an amazing dataset ;)

I would like to know how to compare against the provided nnUNet baseline. What is the exact dataset split? Could you provide the data-split/ID correspondence so that we can compare our own methods with your baseline model weights?

Cheers,
Jiancheng Yang
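For anyone lining up their own split against the baseline: nnU-Net v1 keeps its cross-validation split in a `splits_final.pkl` file, a pickled list with one `{'train': [...], 'val': [...]}` dict per fold (the exact contents here are an assumption; the dummy file below stands in for the real one):

```python
import os
import pickle
import tempfile

# Dummy splits_final.pkl standing in for the real nnU-Net file:
# a list with one {'train': [...], 'val': [...]} dict per fold.
demo = [{"train": ["case_0", "case_1"], "val": ["case_2"]}]
path = os.path.join(tempfile.mkdtemp(), "splits_final.pkl")
with open(path, "wb") as f:
    pickle.dump(demo, f)

# Inspect the split: fold index, train size, val size.
with open(path, "rb") as f:
    splits = pickle.load(f)
for i, fold in enumerate(splits):
    print(i, len(fold["train"]), len(fold["val"]))  # prints: 0 2 1
```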

Regarding the relation of data between CLINIC and CLINIC-metal

Hi authors, thanks for the awesome work. I notice that CLINIC and CLINIC-metal were collected in the pre-treatment and post-treatment stages, respectively. I am wondering whether there are paired scans of the same patient across the two datasets. If so, could you kindly provide this information?

pretrained model giving mismatch issue

RuntimeError: Error(s) in loading state_dict for Generic_UNet:
size mismatch for conv_blocks_context.0.blocks.0.conv.weight: copying a param with shape torch.Size([30, 15, 3, 3]) from checkpoint, the shape in current model is torch.Size([30, 1, 3, 3]).
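One plausible reading (an assumption, not confirmed by the authors): the checkpoint was saved from a model whose first convolution takes 15 input channels (e.g. a cascade stage that concatenates extra channels onto the image), while the instantiated model expects a single channel. A minimal sketch reproducing the same class of error with plain PyTorch:

```python
import torch.nn as nn

# Stand-in layers: a 15-channel "checkpoint" model vs. a 1-channel
# current model. Channel counts mirror the shapes in the error above.
src = nn.Conv2d(15, 30, kernel_size=3)  # weight shape: [30, 15, 3, 3]
dst = nn.Conv2d(1, 30, kernel_size=3)   # weight shape: [30, 1, 3, 3]

try:
    dst.load_state_dict(src.state_dict())
except RuntimeError as err:
    # Same "size mismatch ... copying a param with shape ..." error class
    print("size mismatch" in str(err))  # -> True
```

In practice this usually means the checkpoint and the instantiated architecture/plans do not match (wrong task, stage, or number of input modalities), so the fix is to load the checkpoint with its matching configuration rather than to relax the check.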

Different predictions between nnunet(original mic-dkfz) and nnunet(pelvic1k)

Is the nnunet that Pelvic1k uses different from the original? I cannot get good results using the original, but with the Pelvic1k nnunet the segmentation is very good.

As the attached prediction screenshots show, both use the 3D fullres configuration.

And the training process:

nnunet(original)
nnunet_progress

nnunet(pelvic1k)
pelvic1k_progress

And the plans.pkl:

plans.zip

It seems that the original nnunet cannot tell right from left. I'm not very familiar with nnunet, and reading the code from the beginning is quite confusing.
By the way, I'm using nnUNetTrainerV2, but I don't think that should make such a difference.
Any clue is appreciated!

CERVIX metric low

I've downloaded sub-dataset 5 (CERVIX) and split out the test set using splits_final.pkl (fold 5).
I also downloaded CTPelvic1K_Models.tar.gz.
I used command_12 to run prediction and command_16 to compute the metrics.
My mean DC is 0.84, mean HD is 18, whole DC is 0.87, whole HD is 59.
The mean DC in the paper is 0.972.
Is this normal for the 2D model and 3D cascade model, or is there something wrong with my testing method?
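For reference when comparing numbers like these, a hedged sketch of the per-label Dice coefficient (DC); the label encoding and arrays here are illustrative, not necessarily what the repository's evaluation script uses:

```python
import numpy as np

# Per-label Dice coefficient: 2 * |P ∩ G| / (|P| + |G|), with the empty
# case defined as 1.0 (both prediction and ground truth lack the label).
def dice(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

# Toy label maps (0 = background, 1 and 2 = foreground structures).
pred = np.array([0, 1, 1, 2])
gt   = np.array([0, 1, 2, 2])
print(round(dice(pred, gt, 1), 3))  # -> 0.667
```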

There are several data points that have a wrong physical direction

The following cases have an incorrect physical direction (these are my file names, but they largely preserve the original order):
../Data/dataset2_0003_4_325
../Data/dataset2_0460_5_325
../Data/dataset2_0480_2_324
../Data/dataset2_0616_4_325_hard
../Data/dataset2_0703_4_325
../Data/dataset2_0704_4_325

AttributeError: 'NoneType' object has no attribute 'maxsize'

After running one fold of 3d_fullres (or another configuration), this error occurs. I have looked for many solutions, but nothing seems to help.
I tried different versions of batchgenerators, but that didn't work either.
I am stuck.

finished prediction, now evaluating...
Exception ignored in: <function MultiThreadedAugmenter.__del__ at 0x7faf34d97d30>
Traceback (most recent call last):
File "/home/cui/anaconda3/envs/wyc/lib/python3.9/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 269, in __del__
File "/home/cui/anaconda3/envs/wyc/lib/python3.9/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 239, in _finish
File "/home/cui/anaconda3/envs/wyc/lib/python3.9/multiprocessing/synchronize.py", line 338, in set
File "/home/cui/anaconda3/envs/wyc/lib/python3.9/multiprocessing/synchronize.py", line 297, in notify_all
AttributeError: 'NoneType' object has no attribute 'maxsize'
(the same "Exception ignored" traceback is printed a second time)

Discrepancy in COLONOG Dataset Mask Count: Stated 731 vs. Found 714

Thank you for your great work in providing such a large-scale annotated dataset for the pelvis.

While working with the datasets provided, I noticed a discrepancy in the count for the COLONOG dataset. According to both the repository documentation and the associated paper, the count for this dataset is stated as 731. However, after downloading and inspecting the data, I found that there are only 714 masks.
(three screenshots of the file counts attached)

Could you please check this? I'd really appreciate it.

Best
Ziyan Huang
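For cross-checking a count like 731 vs. 714, a self-contained sketch; the directory layout and `.nii.gz` extension are assumptions about the extracted COLONOG masks, and the demo below uses dummy files:

```python
import tempfile
from pathlib import Path

# Count mask files under a directory to verify a dataset's stated total.
def count_masks(mask_dir: Path, pattern: str = "*.nii.gz") -> int:
    return sum(1 for _ in mask_dir.glob(pattern))

# Self-contained demo with dummy files standing in for real masks.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for i in range(5):
        (root / f"mask_{i:04d}.nii.gz").touch()
    print(count_masks(root))  # -> 5
```

Pointing `count_masks` at the extracted COLONOG mask folder would reproduce the 714 figure (or reveal where files went missing during download).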
