segnetcmr's Introduction

SegNetCMR

A Tensorflow implementation of SegNet for segmenting CMR images

NEW RELEASE

  1. Switched to the SELU activation function - no more batch norm and is_training hassle - a self-normalising neural network!
  2. To support the above, input images are rescaled to the range [-1, 1]: (2/255.0) * image - 1.
  3. Results are now written more often and checkpoints saved less often, which is faster. The results are also no longer flushed after every write.
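
The rescaling in point 2 maps 8-bit pixel values into [-1, 1]. A quick NumPy illustration (the example values are illustrative only):

```python
import numpy as np

# Map uint8 pixel values [0, 255] into [-1, 1], matching the formula above.
image = np.array([0, 127, 255], dtype=np.float32)
scaled = (2 / 255.0) * image - 1
```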

Aims

  1. Demonstrate a more complete Tensorflow program, including saving state and resuming.
  2. Provide a ready-to-go example of medical image segmentation, with sufficient training and validation data in a usable format (PNGs).

Requirements

You must have a GPU and install the tensorflow-gpu version, as the CPU version does not provide tf.nn.max_pool_with_argmax().

  1. Python >=3.6: Best to use the Conda distribution
  2. tensorflow-gpu >=0.11

Todo

  1. Add code to run on your own data (currently only the training code is present).

Running

Make sure you have conda and tensorflow installed:

conda install tensorflow-gpu
python
Python 3.6.1 | packaged by conda-forge | (default, Sep  8 2016, 14:36:38)

Then git clone this repository:

git clone https://github.com/mshunshin/SegNetCMR.git

Then start training from the repository folder:

cd /path/to/SegNetCMR
python train.py

In another terminal window, start tensorboard:

tensorboard --logdir ./Output

Then, in your web browser, go to http://localhost:6006

Training and test data

Many thanks to the Sunnybrook Health Sciences Centre for providing a set of CMR data with associated contours. Unfortunately, in the latest release the filenames have become a little mangled and don't match up with the contours. I have gone through the files and matched them up, exported the DICOMs as PNGs, and converted the contour coordinate lists to PNGs as well.

The first two sets of CMRs are included as training data, and the last set as test data.
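
The contour-to-PNG conversion described above amounts to rasterising each closed contour into a binary mask. A self-contained NumPy sketch of one way to do this (an even-odd polygon fill; the function name and details are illustrative, not the repository's actual code):

```python
import numpy as np

def contour_to_mask(contour, shape):
    """Rasterise a closed contour (list of (x, y) vertices) into a
    binary mask of the given (height, width) using the even-odd rule."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    px, py = xs + 0.5, ys + 0.5  # sample at pixel centres
    mask = np.zeros((h, w), dtype=bool)
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        # Does this edge cross the horizontal line through each pixel centre?
        crosses = (y0 <= py) != (y1 <= py)
        # x-coordinate of the crossing (only used where crosses is True)
        xint = x0 + (py - y0) * (x1 - x0) / (y1 - y0 + 1e-12)
        mask ^= crosses & (px < xint)
    return mask.astype(np.uint8) * 255  # white foreground, as in a PNG mask
```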

With thanks to

andreaazzini/segnet: A Tensorflow port of SegNet

pydicom: A pure-Python DICOM library

StackOverflow Tensorflow batch_norm thread

GitHub Tensorflow unpool thread

Issues and annoyances

  1. The original SegNet uses max_pool_with_argmax and requires an unpool_with_argmax. Unfortunately, Tensorflow does not provide an unpool_with_argmax. Fortunately, there is code in the GitHub thread above to make your own.
  2. This version of unpool_with_argmax runs on the CPU, not the GPU, so it is a little slower.
  3. Tensorflow does not provide a CPU version of max_pool_with_argmax, so if you don't have a GPU you can't run this.
  4. Tensorflow does not include a gradient for max_pool_with_argmax, so one is registered at the bottom of train.py.
  5. The name mangling of the Sunnybrook CMR data - I have fixed this, and the corrected data is included in the download.
  6. SegNet works better with a softmax loss that is inversely weighted by class frequency.
  7. Now using SELU as the activation function - this allows us to get rid of batch norm (and the associated is_training hassle).
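
The unpool operation in point 1 can be sketched independently of Tensorflow. A minimal NumPy illustration of the idea (the function name and 1-D toy data are hypothetical): max-pooling-with-argmax records the flat index of each maximum, and unpooling scatters the pooled values back to those positions, with zeros everywhere else.

```python
import numpy as np

def unpool_with_argmax(pooled, argmax, out_size):
    """Scatter pooled values back to the flat positions recorded by
    max-pooling-with-argmax; every other output element is zero."""
    out = np.zeros(out_size, dtype=pooled.dtype)
    out[argmax.ravel()] = pooled.ravel()
    return out

# 1-D toy example: max-pooling [3, 1, 4, 2] with window 2 keeps the
# values [3, 4] at flat positions [0, 2]; unpooling restores a sparse
# version of the original.
restored = unpool_with_argmax(np.array([3.0, 4.0]), np.array([0, 2]), 4)
```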
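
The class-frequency weighting in point 6 is often implemented as median-frequency balancing, as in the SegNet paper. A minimal NumPy sketch (the function name and toy labels are hypothetical, not this repository's code):

```python
import numpy as np

def median_frequency_weights(label_maps, num_classes):
    """Per-class loss weights inversely related to class frequency
    (median-frequency balancing): rare classes get weights above 1."""
    counts = np.zeros(num_classes)
    for lm in label_maps:
        counts += np.bincount(lm.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    return np.median(freq) / freq

# Toy example: background (class 0) dominates, so the foreground
# class receives the larger weight.
labels = [np.array([[0, 0, 0, 1]])]
w = median_frequency_weights(labels, 2)
```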

License

SegNetCMR: MIT license

Sunnybrook Cardiac Data: Public Domain

pydicom: MIT license

segnetcmr's People

Contributors

mshunshin


segnetcmr's Issues

Could not add gradient for MaxPoolWithArgMax

First, thank you for sharing your program, but I have some errors with MaxPoolWithArgmax.
I use tf 1.4 and Python 3.5.4, and it shows this error:


Could not add gradient for MaxPoolWithArgMax, Likely installed already (tf 1.4)
"Registering two gradient with name 'MaxPoolWithArgmax' !(Previous registration was in runcode C:\python35\lib\idlelib\run.py:357)"
loading images
finished loading images
Number of examples found: 526
loading images
finished loading images
Number of examples found: 279
Last trained iteration was: 0
Exception
OOM when allocating tensor with shape[6,64,128,128]
[[Node: pool2/conv2_1/conv/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](pool1/maxpool1, pool2/conv2_1/conv/kernel/read)]]
[[Node: Mean_1/_343 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3247_Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Caused by op 'pool2/conv2_1/conv/Conv2D', defined at:
File "", line 1, in
File "C:\python35\lib\idlelib\run.py", line 130, in main
ret = method(*args, **kwargs)
File "C:\python35\lib\idlelib\run.py", line 357, in runcode
exec(code, self.locals)
File "B:\SegNetCMR-master\train.py", line 124, in
main()
File "B:\SegNetCMR-master\train.py", line 46, in main
logits, softmax_logits = tfmodel.inference(images, class_inc_bg=2)
File "B:\SegNetCMR-master\tfmodel\inference.py", line 46, in inference
net = c2rb(net, 128, [3, 3], scope='conv2_1')
File "B:\SegNetCMR-master\tfmodel\inference.py", line 28, in c2rb
name='conv')
File "C:\python35\lib\site-packages\tensorflow\python\layers\convolutional.py", line 608, in conv2d
return layer.apply(inputs)
File "C:\python35\lib\site-packages\tensorflow\python\layers\base.py", line 671, in apply
return self.call(inputs, *args, **kwargs)
File "C:\python35\lib\site-packages\tensorflow\python\layers\base.py", line 575, in call
outputs = self.call(inputs, *args, **kwargs)
File "C:\python35\lib\site-packages\tensorflow\python\layers\convolutional.py", line 167, in call
outputs = self._convolution_op(inputs, self.kernel)
File "C:\python35\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 835, in call
return self.conv_op(inp, filter)
File "C:\python35\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 499, in call
return self.call(inp, filter)
File "C:\python35\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 187, in call
name=self.name)
File "C:\python35\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 630, in conv2d
data_format=data_format, name=name)
File "C:\python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "C:\python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[6,64,128,128]
[[Node: pool2/conv2_1/conv/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](pool1/maxpool1, pool2/conv2_1/conv/kernel/read)]]
[[Node: Mean_1/_343 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3247_Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Checkpoint Saved
Stopping

regarding dataset

In the Sunnybrook dataset there are contours for both the epicardium and the endocardium. Your code generates a single mask for the left ventricle as a whole. Can you please tell me whether the mask covers the endocardial or the epicardial left-ventricle area?

About Paper

Has this model been published in the relevant literature? I hope to see some detailed descriptions of the network in the literature. Thanks.

Training from scratch

Hi, great work! As far as I can see, you are training the full network from scratch. Is there a reason why you don't initialise the encoder weights from a pretrained VGG-16 (first 13 layers), as in the SegNet paper?

About feed value

Thank you for the great work.
When I run train.py, the system says: Cannot feed value of shape (0,) for Tensor 'Placeholder:0', which has shape '(6, 256, 256, 1)'.
How can I solve this problem?
