
medicaldataaugmentationtool-verse's Introduction

Coarse to Fine Vertebrae Localization and Segmentation with SpatialConfiguration-Net and U-Net

Usage

This code was used for the VerSe2019 and VerSe2020 challenges, as well as for the paper Coarse to Fine Vertebrae Localization and Segmentation with SpatialConfiguration-Net and U-Net. See the subfolders verse2019 and verse2020 for instructions on running the code.

Citation

If you use this code for your research, please cite our paper and the overview paper of the Verse2019 challenge:

@inproceedings{Payer2020,
  title     = {Coarse to Fine Vertebrae Localization and Segmentation with SpatialConfiguration-Net and U-Net},
  author    = {Payer, Christian and {\v{S}}tern, Darko and Bischof, Horst and Urschler, Martin},
  booktitle = {Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
  doi       = {10.5220/0008975201240133},
  pages     = {124--133},
  volume    = {5},
  year      = {2020}
}
@misc{Sekuboyina2020verse,
 title         = {VerSe: A Vertebrae Labelling and Segmentation Benchmark},
 author        = {Anjany Sekuboyina and Amirhossein Bayat and Malek E. Husseini and Maximilian Löffler and Markus Rempfler and Jan Kukačka and Giles Tetteh and Alexander Valentinitsch and Christian Payer and Martin Urschler and Maodong Chen and Dalong Cheng and Nikolas Lessmann and Yujin Hu and Tianfu Wang and Dong Yang and Daguang Xu and Felix Ambellan and Stefan Zachow and Tao Jiang and Xinjun Ma and Christoph Angerman and Xin Wang and Qingyue Wei and Kevin Brown and Matthias Wolf and Alexandre Kirszenberg and Élodie Puybareau and Björn H. Menze and Jan S. Kirschke},
 year          = {2020},
 eprint        = {2001.09193},
 archivePrefix = {arXiv},
 primaryClass  = {cs.CV}
}

medicaldataaugmentationtool-verse's People

Contributors

christianpayer


medicaldataaugmentationtool-verse's Issues

Is this a semi-automatic model?

The vertebrae_segmentation step expects landmark.csv and valid_landmark.csv as input, but the vertebrae_localization step does not output these two files. How can I produce them when predicting on new data?

How to speed up inference?

Very nice project and tools, thanks for sharing! I am wondering how to improve the inference speed. We ran a test with 40 3D images on a V100 GPU (32 GB); it takes 56 seconds per image, which is too slow for us. Secondly, we want to reduce the model size to use less memory and make inference faster without too much performance drop. What do you think?

Some information about model

Hi @christianpayer,
Thank you so much for your source code and your instructions for VerSe.
However, I don't quite understand the model files in docker/models.
Could you give me some detailed information about those files, please?
Thank you so much.

Some questions

When training main_vertebrae_localization.py, it is very slow; I can only train 10,000 iterations in 24 hours. Some samples also get stuck during testing. I use one V100 GPU.

ImportError: No module named 'utils.io'

After installing TensorFlow and running the code, I get the following import error:

    import utils.io.image
    ModuleNotFoundError: No module named 'utils.io'

Before this, I got:

    ImportError: No module named 'utils'

After installing python-utils, 'utils' works but 'utils.io' still does not.

My environment: Ubuntu 16.04 + TensorFlow 1.3.0 + Python 3.6.2

Any help is highly appreciated.
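
For anyone hitting the same error: the 'utils' package here is part of the author's MedicalDataAugmentationTool framework, not a PyPI distribution, so 'pip install python-utils' pulls in an unrelated project. A likely fix (repository URL and paths assumed; adjust to your checkout) is to clone the framework and put it on PYTHONPATH:

```shell
# Clone the framework that provides the 'utils' package (URL assumed)
git clone https://github.com/christianpayer/MedicalDataAugmentationTool.git
# Make its top-level 'utils' package importable
export PYTHONPATH="$PWD/MedicalDataAugmentationTool:$PYTHONPATH"
# The import from the error message should now resolve
python -c "import utils.io.image"
```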

Not an issue, just a comment: RAI vs LPS confusion

Based on the Slicer wiki and the ITK wiki, I think the direction matrix used in the code can also be called LPS orientation, not just RAI; it is a matter of using "to" or "from" to interpret each letter:

     # LPS: Right to (L)eft, Anterior to (P)osterior, Inferior to (S)uperior
     # RAI: (R)ight to Left, (A)nterior to Posterior, (I)nferior to Superior
     m = itk.GetMatrixFromArray(np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], np.float64))

Since all output images have the same orientation, the naming probably does not matter.

A question

Thanks for sharing the code. I had a problem running main_vertebrae_localization.py: it generated an empty prediction image with no content in it. For example, verse004_prediction.mha is empty. How can I solve this problem?

How to verify the output results?

This code looks good; however, it is hard to use due to the lack of instructions.
After training and inference, I get the .mha files and bbs.csv.
How can I verify (or display) the output results (.mha)?

GL108_CT_ax,0.8339234590530396,8.83392345905304,0.75,96.83392345905304,136.83392345905304,184.75
GL146_CT_ax,3.8125,51.8125,138.875,67.8125,171.8125,242.875
GL195_CT_ax,-12.15625,19.84375,13.25,43.84375,155.84375,141.25
GL216_CT_ax,-3.6572570204734802,52.34274297952652,72.625,108.34274297952652,148.34274297952652,200.625
GL217_CT_ax,-8.64743047952652,31.35256952047348,2.875,95.35256952047348,151.35256952047348,258.875
GL279_CT_ax,9.785285696387291,9.785285696387291,4.5,81.78528569638729,153.7852856963873,180.5
GL348_CT_ax,-14.136589303612709,25.86341069638729,-1.75,73.86341069638729,145.8634106963873,174.25
GL419_CT_ax,-13.15423595905304,18.84576404094696,13.25,74.84576404094696,162.84576404094696,213.25
GL428_CT_ax,4.32711797952652,12.32711797952652,19.375,52.32711797952652,140.32711797952652,107.375
GL492_CT_ax,-14.136589303612709,25.86341069638729,-0.25,89.86341069638729,137.8634106963873,247.75
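
To sanity-check the .mha outputs, dedicated viewers such as ITK-SNAP or 3D Slicer open them directly. As a quick dependency-free check, the MetaImage header is plain ASCII key = value lines, so a small reader (a sketch, not part of this repository) can at least confirm the image geometry:

```python
def read_mha_header(path):
    """Parse the ASCII header of a MetaImage (.mha) file into a dict.

    The header ends at the ElementDataFile line; binary voxel data follows.
    """
    header = {}
    with open(path, 'rb') as f:
        for raw in f:
            line = raw.decode('ascii', errors='ignore').strip()
            if '=' not in line:
                break  # reached binary data unexpectedly
            key, value = (s.strip() for s in line.split('=', 1))
            header[key] = value
            if key == 'ElementDataFile':  # last header field by convention
                break
    return header

# Example: read_mha_header('verse004_prediction.mha')['DimSize']
```

Comparing DimSize and ElementSpacing against the input image is a quick way to verify that the prediction has the expected geometry.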

A question

Hi,
in The Unet class:
    self.prediction = Sequential([Conv3D(num_labels, [1] * 3, name='prediction',
                                         kernel_initializer=heatmap_layer_kernel_initializer,
                                         activation=None, data_format=data_format, padding=padding),
                                  Activation(None, dtype='float32', name='prediction')])

Why do you use Activation(None)? What does it do? Is this a convolutional operation or a fully connected one?

Thanks !
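
Not the author's answer, just my reading: Activation(None) applies no nonlinearity at all; the point of the layer appears to be dtype='float32', which casts the output of the (possibly float16, under mixed-precision training) Conv3D back to full precision. The Conv3D with a 1x1x1 kernel is still a convolution, acting as a per-voxel linear map over channels. A dependency-light sketch of the identity-plus-cast behaviour:

```python
import numpy as np

def activation_none_float32(x):
    # Activation(None) is the identity; dtype='float32' only widens the
    # values, e.g. from float16 (mixed precision) back to full precision.
    return np.asarray(x, dtype=np.float32)

half = np.array([0.5, -1.25], dtype=np.float16)
out = activation_none_float32(half)  # values unchanged, dtype float32
```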

Save Output Images for segmentation during inference

Hi!

I'm finding that the _seg.png image output by the 'test' function in inference/main_vertebrae_segmentation.py is blank.
The file size is 2 KB, far smaller than the 50 KB input image that is generated just before.

I've checked that the variable prediction_labels_resampled is correct, and the _seg.nii.gz output is correct too.

Any ideas on how I could fix this?
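
Not a confirmed diagnosis, but a common cause of "blank" label PNGs: vertebra labels are small integers (roughly 0-25), which are nearly black when written directly into an 8-bit image. If that is the case here, rescaling before saving makes the labels visible; a minimal sketch (helper name is mine, not from the repository):

```python
import numpy as np

def labels_to_uint8(seg, max_label=25):
    # Spread small integer labels (0..max_label) over the full 0..255
    # grayscale range so they are visible in an 8-bit PNG.
    return np.clip(seg.astype(np.float32) / max_label * 255.0, 0, 255).astype(np.uint8)
```

For example, background (0) stays 0 while the highest label maps to 255 instead of an almost-invisible 25.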

Errors when running main_spine_localization.py

Hi, thanks for your open-source toolkit and project. It is very helpful and attractive.
I'm trying to train the models of this project, but when I ran the first training file, I ran into some errors.

Here is the error information. The TensorFlow version is 1.14 and the OS is Ubuntu.

Could you tell me how I can solve this problem?
Thank you very much!

(error screenshots attached in the original issue)

main_vertebrae_localization problem

Thanks for sharing the code. I had a problem running main_vertebrae_localization.py, which generated an empty prediction image with no content in it. The error is as follows:
    Traceback (most recent call last):
      File "d:/MedicalDataAugmentationTool/bin/main_vertebrae_localization.py", line 269, in test
        curr_landmarks = spine_postprocessing.solve_local_heatmap_maxima(local_maxima_landmarks)
      File "D:\MedicalDataAugmentationTool\utils\landmark\spine_postprocessing_graph.py", line 139, in solve_local_heatmap_maxima
        shortest_path = nx.shortest_path(G, 's', 't', 'weight', method='bellman-ford')
      File "D:\ana\envs\verse\lib\site-packages\networkx\algorithms\shortest_paths\generic.py", line 164, in shortest_path
        paths = nx.bellman_ford_path(G, source, target, weight)
      File "D:\ana\envs\verse\lib\site-packages\networkx\algorithms\shortest_paths\weighted.py", line 1385, in bellman_ford_path
        length, path = single_source_bellman_ford(G, source, target=target, weight=weight)
      File "D:\ana\envs\verse\lib\site-packages\networkx\algorithms\shortest_paths\weighted.py", line 1612, in single_source_bellman_ford
        dist = _bellman_ford(G, [source], weight, paths=paths, target=target)
      File "D:\ana\envs\verse\lib\site-packages\networkx\algorithms\shortest_paths\weighted.py", line 1268, in _bellman_ford
        raise nx.NodeNotFound(f"Source {s} not in G")
How can I solve it?

How to generate landmark.csv

Thank you very much for your open-source code, which helps me a lot. I have a question about landmark.csv, which is a little different from the official JSON file. The X coordinate value confuses me, and I am not sure how to generate my own landmark.csv.
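
In case it helps others: assuming the official annotation is a JSON list of centroid entries with 'label', 'X', 'Y', 'Z' fields (field names and axis conventions vary between VerSe releases, so verify against your files), a stdlib-only conversion sketch looks like:

```python
import csv
import json

def json_centroids_to_csv(json_path, csv_path):
    # Assumed input layout: [{"label": 1, "X": 10.5, "Y": 20.0, "Z": 30.25}, ...]
    # Axis order/direction may need adjusting to match the expected landmark.csv.
    with open(json_path) as f:
        centroids = json.load(f)
    with open(csv_path, 'w', newline='') as f:
        writer = csv.writer(f)
        for c in centroids:
            writer.writerow([c['label'], c['X'], c['Y'], c['Z']])
```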

The shape of the GLxxx files is resized!

This is about the size of the GLxxx files. I tried to extract the 2D central slice from the 3D image. Here is the result for GL003.

Do you know why the size of these files is not correct?

(slice image GL003_11 attached in the original issue)

run main_spine_localization.py ,loss_net = nan

When training main_spine_localization.py, loss_net becomes nan. I have checked all the parameters but found nothing wrong.
Do you know why? Thank you.

09:37:40: train iter: 0 loss_net: 0.3521 norm: 4.5986 norm_average: 9.9460 seconds: 11.332
09:38:08: train iter: 100 loss_net: nan norm: nan norm_average: 9.9460 seconds: 28.198
09:38:37: train iter: 200 loss_net: nan norm: nan norm_average: 9.9460 seconds: 29.554
09:39:06: train iter: 300 loss_net: nan norm: nan norm_average: 9.9460 seconds: 28.922
09:39:40: train iter: 400 loss_net: nan norm: nan norm_average: 9.9460 seconds: 33.664
09:40:12: train iter: 500 loss_net: nan norm: nan norm_average: 9.9460 seconds: 32.590
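
Not an answer from the author, but the pattern above (finite loss at iteration 0, nan from the first logged norm onward) is the classic signature of exploding gradients; common first checks are lowering the learning rate, inspecting the input data for nan/inf intensities, and clipping gradients by their global norm. A framework-agnostic sketch of the last idea:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm=10.0):
    # Rescale all gradients together if their global L2 norm exceeds
    # clip_norm, a common safeguard against nan losses caused by
    # exploding gradients.
    global_norm = float(np.sqrt(sum(np.sum(g * g) for g in grads)))
    if global_norm > clip_norm:
        grads = [g * (clip_norm / global_norm) for g in grads]
    return grads, global_norm
```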
