assassint2017 / miccai-lits2017
liver segmentation using deep learning
I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). After training the network and evaluating it on the test set (26 cases), I obtained a dice per case of 0.932, which is lower than your reported result (0.957).
Most importantly, I found that the dice coefficient on volume-54.nii is very poor (0.18). I visualized the segmentation result of volume-54.nii, compared it to its ground truth, and found a misalignment of about 10 slices between them. For example, the segmentation result started at the 62nd slice, while the ground truth started at the 52nd slice.
How should liver tumors be segmented? This challenge is a multi-label problem; how should the data be processed if I want to segment the liver and liver tumors simultaneously? Looking forward to your answer.
Hello, I'm glad to see your code shared. I have one question: why is the model output [B,1,256,256,256]? Shouldn't the 1 correspond to the number of classes? For a three-class problem, shouldn't the model output be [B,3,256,256,256]? I hope you can clear up my confusion, thanks!
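For what it's worth, a single output channel usually means the network treats this as binary (foreground vs. background) segmentation with a sigmoid, while a three-class setup would indeed need 3 output channels (logits for CrossEntropyLoss). A minimal sketch, assuming placeholder feature channels (16) and spatial size (8), not the repo's actual head:

```python
import torch
import torch.nn as nn

# [B, C, D, H, W] dummy feature map; sizes here are placeholders.
x = torch.randn(2, 16, 8, 8, 8)

# Binary segmentation: 1 channel + Sigmoid -> per-voxel foreground probability.
binary_head = nn.Sequential(nn.Conv3d(16, 1, 1), nn.Sigmoid())

# Three-class segmentation: 3 channels of raw logits for CrossEntropyLoss.
multi_head = nn.Conv3d(16, 3, 1)

print(binary_head(x).shape)  # torch.Size([2, 1, 8, 8, 8])
print(multi_head(x).shape)   # torch.Size([2, 3, 8, 8, 8])
```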
Hi, could you please tell me how long it takes to train for 1000 epochs?
Hi, I set up my environment following your code, but the training loss does not change. I then added a validation loss and found that it also stays constant. I increased the learning rate, but that still had no effect. Have you encountered this problem?
Can this network be trained on my own MR images?
Could you explain how alpha is chosen? Why is the value of alpha 0.33?
Should we give the network the whole 3D CT image as input? Please explain.
Please also explain the folder names 'train/ct' and 'train/seg'.
Thanks
AttributeError: 'DataFrame' object has no attribute 'append'
How do I fix it?
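`DataFrame.append()` was deprecated in pandas 1.4 and removed in pandas 2.0; replacing it with `pd.concat()` is the usual fix. A minimal sketch (the row contents are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"case": ["volume-0"], "dice": [0.95]})
new_row = pd.DataFrame({"case": ["volume-1"], "dice": [0.93]})

# Old code (raises AttributeError on pandas >= 2.0):
#   df = df.append(new_row, ignore_index=True)
# Replacement:
df = pd.concat([df, new_row], ignore_index=True)
print(len(df))  # 2
```

Pinning `pandas` below 2.0 also works, but migrating to `pd.concat` keeps the code compatible with current releases.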
Hi, in MICCAI-LITS2017, overly strict dependency version constraints can introduce risks.
Below are the dependencies and version constraints that the project currently uses:
numpy==1.14.2
torch==1.0.1.post2
visdom==0.1.8.8
pandas==0.23.3
scipy==1.0.0
tqdm==4.40.2
scikit-image==0.13.1
SimpleITK==1.0.1
pydensecrf==1.0rc3
The == version constraint introduces a risk of dependency conflicts because the allowed dependency range is too narrow.
Conversely, constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest versions of dependencies may remove some APIs.
After further analysis of this project:
The version constraint of dependency visdom can be relaxed to >=0.1.8,<=0.1.8.9.
The version constraint of dependency scipy can be relaxed to >=0.12.0,<=1.7.3.
The version constraint of dependency tqdm can be relaxed to >=4.36.0,<=4.64.0.
The version constraint of dependency scikit-image can be relaxed to >=0.12.0,<=0.19.3.
These suggestions reduce dependency conflicts as much as possible while admitting the latest versions that do not raise call errors in the project.
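The suggested ranges above could be applied directly in a requirements.txt; a sketch, keeping the original pins for the packages the analysis did not cover:

```
numpy==1.14.2
torch==1.0.1.post2
visdom>=0.1.8,<=0.1.8.9
pandas==0.23.3
scipy>=0.12.0,<=1.7.3
tqdm>=4.36.0,<=4.64.0
scikit-image>=0.12.0,<=0.19.3
SimpleITK==1.0.1
pydensecrf==1.0rc3
```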
The project invokes the following dependency APIs (grouped by package; the analysis also reported internal project methods such as net.ResUNet.*, loss.*, dataset.dataset.Dataset, and utilities.calculate_metrics.Metirc.*):
visdom: Visdom, Visdom.line
scipy: ndimage.zoom, ndimage.morphology.binary_erosion, ndimage.morphology.generate_binary_structure, spatial.cKDTree, spatial.cKDTree.query
tqdm: tqdm
scikit-image: morphology.remove_small_holes, measure.label, measure.regionprops
SimpleITK: ReadImage, WriteImage, GetArrayFromImage, GetImageFromArray, Image.GetSpacing / GetOrigin / GetDirection / SetSpacing / SetOrigin / SetDirection
pydensecrf: densecrf.DenseCRF (setUnaryEnergy, addPairwiseEnergy, startInference, stepInference, inference), utils.create_pairwise_gaussian, utils.create_pairwise_bilateral, utils.unary_from_softmax
pandas: DataFrame (mean, std, min, max, to_excel), ExcelWriter (save)
torch: nn.Conv3d, nn.ConvTranspose3d, nn.PReLU, nn.Sigmoid, nn.Upsample, nn.Sequential, nn.DataParallel, nn.BCELoss, nn.CrossEntropyLoss, nn.functional.dropout, nn.init.kaiming_normal_, nn.init.constant_, optim.Adam, optim.lr_scheduler.MultiStepLR, utils.data.DataLoader, save, load, cat, clamp, pow, log, no_grad, FloatTensor, ones_like
numpy: array, zeros, zeros_like, ones, stack, squeeze, argmax, any, where, prod, power
@developer
Could you please help me check this issue?
May I submit a pull request to fix it?
Thank you very much.
I want to use your code to re-implement liver and tumor segmentation on my own dataset. Could you share the specific implementation steps?
Thanks for sharing your work.
This code contains no implementation of the "hybrid dilated convolution".
Can you share it?
Thanks
Dear author:
My question is how to extract the 3DIRCADb cases from the LiTS dataset. The case numbers run from 27 to 48, so there should be 22 volumes.
Looking forward to your reply. Thank you.
Does this code not include a testing/inference part?
Is there a paper related to this work?
Could you please provide any pre-trained models?
It would be helpful.
Could you share the dataset? I cannot download it from Google Drive. [email protected] A cloud drive link would also be fine.
I found a bug in 'val.py' that causes abnormal segmentation results (a poor dice) in some cases (e.g. volume-54.nii). When the number of slices in a volume is divisible by 48 (the 3D patch's depth) and the volume is split for testing, the variable 'count' becomes 0, but the code
{ pred_seg = np.concatenate([pred_seg, outputs_list[-1][-count:]], axis=0) }
then concatenates the entire last block (a full 256x256x48 patch) onto the segmentation result, when only the last 'count' slices should be used.
--------------------original code -----------------------------
if end_slice is not ct_array.shape[0] - 1:
    flag = True
    count = ct_array.shape[0] - start_slice
    ct_array_list.append(ct_array[-size:, :, :])
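The root cause is that in Python `arr[-0:]` is the same as `arr[0:]`, i.e. the whole array, so with `count == 0` an entire 48-slice block gets appended. A hypothetical sketch of the fix (the `stitch` helper and its names are mine, not the repo's):

```python
import numpy as np

def stitch(blocks, count):
    """blocks[:-1] are full non-overlapping depth patches; blocks[-1] overlaps
    the end of the volume, and only its last `count` slices are new."""
    pred_seg = np.concatenate(blocks[:-1], axis=0)
    if count > 0:  # guard: blocks[-1][-0:] would be the WHOLE block
        pred_seg = np.concatenate([pred_seg, blocks[-1][-count:]], axis=0)
    return pred_seg

blocks = [np.zeros((48, 4, 4)), np.ones((48, 4, 4))]
print(stitch(blocks, 0).shape[0])   # 48, not 96
print(stitch(blocks, 10).shape[0])  # 58
```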
We want to train the model on 2 GPUs, so we changed gpu = "0, 1" in parameter.py. However, it still uses only one GPU (gpu0), and the different approaches we have tried do not work. Could you suggest any solutions? Thank you.
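One common cause is that CUDA_VISIBLE_DEVICES is set after PyTorch has already initialized CUDA, or that the value contains extra characters; the value is usually written without spaces ("0,1"). A minimal sketch, assuming a stand-in module in place of the repo's ResUNet:

```python
import os
# Set this BEFORE the first CUDA call (safest: before importing torch),
# and without spaces in the device list.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch
import torch.nn as nn

net = nn.Conv3d(1, 16, 3)   # stand-in for the repo's ResUNet (assumption)
net = nn.DataParallel(net)  # replicates the module across all visible GPUs
if torch.cuda.is_available():
    net = net.cuda()
print(type(net).__name__)   # DataParallel
```

With DataParallel, the batch is split across devices per forward pass, so the per-GPU batch size is batch_size / num_gpus.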
Hi Sir,
If I want to use the original CT resolution of 512x512 to train a liver or tumor segmentation model, what should I do?
For example, do the network, loss, etc. need to be modified?
Any help will be sincerely appreciated. Thanks a lot.
Excuse me, why, after following your steps, are the final predicted results and the ground-truth labels (0/1) exactly inverted? I did not modify the code. If you see this, I hope to receive your reply.
I wonder whether this method has been submitted to the LiTS benchmark. If so, would anyone be kind enough to share its evaluation metrics?
Hi Sir,
I am now planning to submit test results to LiTS, but I don't know the required submission format.
Do you know what data format I need to use for the submission?
Any help will be sincerely appreciated. Thanks a lot.