3D-Convolutional-Network-for-Alzheimer's-Detection

This repository is an attempt to detect and diagnose Alzheimer's disease using 3D T1-weighted MRI scans from the ADNI database. It contains a data preprocessing pipeline that makes the data suitable for feeding to a 3D ConvNet (or VoxNet), followed by a deep neural network definition and an exploration of the utilities required for such a task.

Prerequisites

The pipeline relies on the Python libraries used throughout this README: NumPy, Matplotlib, Nibabel, Nipype (with FSL installed), scikit-image, TFLearn and TensorFlow.

Data Preprocessing

The brain MRI data is in its ideal form for training when it is skull-stripped, resized to a common size, and labelled for each of the classes in the classification task.

Data loading

The first step is to load the data into NumPy arrays for further manipulation. The Python library Nibabel is used to access each MRI scan through its image object, whose data attribute yields the image as a NumPy array.
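As a minimal sketch (the file path here is a hypothetical local scan), loading a volume with Nibabel might look like this:

```python
import nibabel as nib
import numpy as np

# Load a T1-weighted scan; 'scan.nii' is an illustrative local path.
img = nib.load('scan.nii')

# get_fdata() returns the voxel intensities as a floating-point NumPy array
# (older Nibabel versions exposed the same data via get_data()).
volume = np.asarray(img.get_fdata())
print(volume.shape)  # e.g. (256, 256, 166), depending on the acquisition
```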

Visualisation

There is no predefined function in Python packages to view 3D images along a 3D axis; however, slice-by-slice visualisation can be done with Matplotlib. It can proceed without manual input of the depth value using the method given by Juan Nunez-Iglesias in his blog. The same method is adopted here, except that the slice index is increased by tapping 'Q' and decreased by tapping 'A'. The gif below shows this form of visualisation.

[GIF: slice-by-slice visualisation]
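A minimal sketch of such a key-driven viewer, in the spirit of the blog method described above (note that 'q' normally closes a Matplotlib figure, so that default binding is released first; names are illustrative):

```python
import matplotlib.pyplot as plt

plt.rcParams['keymap.quit'] = []          # 'q' closes figures by default; free it up

def multi_slice_viewer(volume):
    """Display one slice at a time; 'q' moves to the next slice, 'a' to the previous."""
    fig, ax = plt.subplots()
    ax.volume = volume                    # stash the volume and index on the Axes
    ax.index = volume.shape[0] // 2       # start from the middle slice
    ax.imshow(volume[ax.index], cmap='gray')
    fig.canvas.mpl_connect('key_press_event', process_key)
    plt.show()

def process_key(event):
    ax = event.canvas.figure.axes[0]
    if event.key == 'q':
        ax.index = (ax.index + 1) % ax.volume.shape[0]   # next slice
    elif event.key == 'a':
        ax.index = (ax.index - 1) % ax.volume.shape[0]   # previous slice
    ax.images[0].set_array(ax.volume[ax.index])
    event.canvas.draw()
```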

Skull Stripping

FSL is a library for the analysis and manipulation of MRI brain imaging data, and Nipype provides an interface for using FSL from Python code; this is what the given code uses to skull-strip the images. The frac parameter passes a value for the fractional intensity threshold: a smaller value gives a more generous estimate of the brain at the cost of less aggressive stripping. A sample image skull-stripped with different frac values is displayed below, followed by a code sketch.

[Image: skull-stripped scan, frac=0.0]

[Image: skull-stripped scan, frac=0.2]

[Image: skull-stripped scan, frac=0.5]
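A minimal sketch of this step through Nipype's FSL interface (assumes FSL is installed and on the path; file names are illustrative):

```python
from nipype.interfaces import fsl

# BET (Brain Extraction Tool) with a fractional intensity threshold of 0.2;
# smaller frac values keep more tissue (less aggressive stripping).
bet = fsl.BET(in_file='scan.nii',
              out_file='scan_stripped.nii.gz',
              frac=0.2)
result = bet.run()  # runs FSL's bet binary and writes the stripped volume
```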

Histogram Thresholding & Segmentation

An approximation of the amounts of grey matter, white matter and CSF (cerebrospinal fluid) can be found using multi-Otsu histogram thresholding, where the thresholds are chosen to maximise the between-class variance:

\sigma_B^2 \;=\; \sum_{k=1}^{M} \omega_k \,(\mu_k - \mu_T)^2

where \omega_k and \mu_k are the probability mass and mean intensity of class k, and \mu_T is the global mean.

There was no predefined function for this in the Python packages used, so the code for it was written from scratch in Python. A sample of the thresholds derived for a skull-stripped image is shown in the histogram below.

[Histogram: intensity distribution of a skull-stripped scan with the derived thresholds]

This is a form of global thresholding; better approximations can be made using adaptive and dynamic forms of thresholding, where parts of the image are segmented at a time.
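The repository's own implementation is not reproduced here; the following is a rough illustrative sketch of a brute-force two-threshold Otsu search over a histogram (function name and bin count are arbitrary). Newer scikit-image releases also provide skimage.filters.threshold_multiotsu as a ready-made alternative.

```python
import numpy as np
from itertools import combinations

def multi_otsu(volume, bins=256):
    """Return two thresholds (three classes) maximising between-class variance."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    prob = hist / hist.sum()                    # normalised histogram
    centers = (edges[:-1] + edges[1:]) / 2      # bin centre intensities
    mu_T = (prob * centers).sum()               # global mean intensity

    best_var, best_pair = -1.0, None
    for t1, t2 in combinations(range(1, bins - 1), 2):
        var = 0.0
        for lo, hi in ((0, t1), (t1, t2), (t2, bins)):
            w = prob[lo:hi].sum()               # class probability mass
            if w > 0:
                mu = (prob[lo:hi] * centers[lo:hi]).sum() / w
                var += w * (mu - mu_T) ** 2     # between-class variance term
        if var > best_var:
            best_var, best_pair = var, (centers[t1], centers[t2])
    return best_pair
```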

Final Touches

All the images are resized to the same dimensions using the predefined skimage transform.resize function. The pixel values are then normalised so that training is faster, after which the images are paired with their labels and saved as NumPy objects.
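A minimal sketch of this step (the target shape and file names are illustrative, not the repository's exact values):

```python
import numpy as np
from skimage.transform import resize

TARGET_SHAPE = (96, 96, 96)   # illustrative common size

def preprocess(volume):
    # Resize every scan to the same dimensions...
    vol = resize(volume, TARGET_SHAPE, mode='constant', anti_aliasing=True)
    # ...then normalise intensities to [0, 1] for faster training.
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
    return vol.astype(np.float32)

# Pair each preprocessed scan with its label and save as a NumPy object;
# 'volumes' and 'labels' stand in for parallel lists built earlier.
# data = np.array([(preprocess(v), y) for v, y in zip(volumes, labels)], dtype=object)
# np.save('dataset.npy', data)
```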

3D CNN

A 3D convolutional neural network has been defined using TFLearn, which provides wrapper functions over the TensorFlow framework and makes it easier to build the network. The network is trained with mini-batch gradient descent, with batch normalisation for each activation layer. It uses dropout and L2 regularisation to tackle high variance and is optimised with the Adam optimiser. It is designed for a 3-class task with the classes AD (Alzheimer's Disease), MCI (Mild Cognitive Impairment) and NL (Normal). The layers of the network are defined as per the table given below. The total number of parameters in this network is 7,670,960.

[Table (image): network layer definitions]
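The exact layer table lives in the image above; the following is only a hedged sketch of the general shape of such a TFLearn network (filter counts and input size are illustrative, not the repository's exact values):

```python
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.conv import conv_3d, max_pool_3d
from tflearn.layers.normalization import batch_normalization
from tflearn.layers.estimator import regression

net = input_data(shape=[None, 96, 96, 96, 1])           # (depth, height, width, channels)

for n_filters in (32, 64, 128):                          # three conv blocks (illustrative)
    net = conv_3d(net, n_filters, 3, activation='relu',
                  regularizer='L2', weight_decay=0.001)  # L2 regularisation
    net = batch_normalization(net)                       # batch norm per activation layer
    net = max_pool_3d(net, 2, strides=2)

net = fully_connected(net, 512, activation='relu')
net = dropout(net, 0.5)                                  # dropout against high variance
net = fully_connected(net, 3, activation='softmax')      # AD / MCI / NL

net = regression(net, optimizer='adam',
                 loss='categorical_crossentropy',
                 learning_rate=0.001)                    # Adam with mini-batch SGD
model = tflearn.DNN(net)
```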

However, the above network does not learn deep features well enough, so for experimentation a ResNet with 3D convolutions was used in the hope of better feature learning through its shortcut connections. The ResNet showed better performance, but the improvement was not significant.

What's Next

Training a 3D CNN for an end-to-end task like this is practically possible yet extremely difficult. If the CNN is very deep it is likely to overfit, and if it is too shallow it is likely to underfit; it will only mark decision boundaries satisfactorily if enough data is fed in and region-of-interest localisation is done for training. This is largely due to the complexity of the problem. Alternatively, the most successful approach has been to pretrain the network as a 3D autoencoder, as described in the paper "Predicting Alzheimer's disease: a neuroimaging study with 3D convolutional neural networks". All of this was done on a laptop with 8 GB of RAM (CPU only), with Google Colaboratory used for network training.

Issues

ROI and Patch Extraction Help

Hi Rishal, I want to do 3D patch- and 3D ROI-based Alzheimer's detection using a 3D CNN, but I am not able to find any useful code for that. Can you please help me in whatever way you can?
Thanks

Dataset download

Hello, I have been having problems downloading the ADNI dataset. Could you share a copy of the dataset with me? Thank you.

Problem regarding dataset

The dataset I downloaded from the ADNI database has multiple folders, each containing one .nii file. How do I load these into the code you provide? Can you explain?
Can you share the pre-processed ADNI data?

Unable to load checkpoints of the trained model into the testing program. Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?

ckpt='./trained_model1/model.tfl.ckpt-1250.data-00000-of-00001'

This code, as mentioned, was used to load the checkpoint; it showed the error below and did not work properly.

2019-04-28 21:10:26.175413: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open /home/jubitta/project/trained_model1/model.tfl.ckpt-1250.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
2019-04-28 21:10:26.175769: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open /home/jubitta/project/trained_model1/model.tfl.ckpt-1250.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
2019-04-28 21:10:26.175791: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at save_restore_tensor.cc:175 : Data loss: Unable to open table file /home/jubitta/project/trained_model1/model.tfl.ckpt-1250.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Traceback (most recent call last):
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file /home/jubitta/project/trained_model1/model.tfl.ckpt-1250.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[{{node save_1/RestoreV2}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "testing.py", line 151, in
model.load(ckpt)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tflearn/models/dnn.py", line 308, in load
self.trainer.restore(model_file, weights_only, **optargs)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 490, in restore
self.restorer.restore(self.session, model_file)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1276, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file /home/jubitta/project/trained_model1/model.tfl.ckpt-1250.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[node save_1/RestoreV2 (defined at /home/jubitta/anaconda3/lib/python3.6/site-packages/tflearn/helpers/trainer.py:147) ]]

Caused by op 'save_1/RestoreV2', defined at:
File "testing.py", line 141, in
model = tflearn.DNN(net, checkpoint_path = './tested_model1/model.tfl.ckpt',max_checkpoints=1)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tflearn/models/dnn.py", line 65, in init
best_val_accuracy=best_val_accuracy)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 147, in init
allow_empty=True)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in init
self.build()
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
restore_sequentially, reshape)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 332, in _AddRestoreOps
restore_sequentially)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 580, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1572, in restore_v2
name=name)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/jubitta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

DataLossError (see above for traceback): Unable to open table file /home/jubitta/project/trained_model1/model.tfl.ckpt-1250.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[node save_1/RestoreV2 (defined at /home/jubitta/anaconda3/lib/python3.6/site-packages/tflearn/helpers/trainer.py:147) ]]
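For reference, this error typically arises because TensorFlow stores each checkpoint as several files (.data-*, .index and .meta) and the restorer expects the common prefix rather than the .data shard. A hedged sketch of the likely fix, reusing the names from the snippet above:

```python
# Pass the checkpoint prefix; TensorFlow resolves the shard files itself.
ckpt = './trained_model1/model.tfl.ckpt-1250'   # no '.data-00000-of-00001' suffix
model.load(ckpt)
```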

Dataloading Issue

Hello dear Rishal
Your project is awesome, but how do you load the data into memory?

I mean to say, to process these MR images you have to load them using a data loader and then prepare their dimensions for feeding into your neural network.
You just visualise them, and then there is nothing in the skull-stripping step that loads those skull-stripped images?
Please explain

Input Error

I am getting an error in the code for the inputs. Please let me know the dimensions of the images you are using, as I am getting the following error:
ValueError: Error when checking input: expected input to have 5 dimensions, but got array with shape (160, 1)
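For reference, a 3D ConvNet's input layer expects a 5-dimensional array of shape (samples, depth, height, width, channels); a shape like (160, 1) suggests the 160 scans were never stacked into a single volume array. A hedged sketch (the variable names and 96³ shape are illustrative):

```python
import numpy as np

# 'scans' stands in for 160 preprocessed volumes of identical shape.
scans = [np.zeros((96, 96, 96), dtype=np.float32) for _ in range(160)]

X = np.stack(scans)        # -> (160, 96, 96, 96)
X = X[..., np.newaxis]     # add channel axis -> (160, 96, 96, 96, 1)
print(X.shape)
```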
