noel-mni / deepFCD

Automated Detection of Focal Cortical Dysplasia using Deep Learning

Home Page: https://noel.bic.mni.mcgill.ca/projects/

License: BSD 3-Clause "New" or "Revised" License

epilepsy machine-learning python focal-cortical-dysplasia automated-fcd-detection fcd deep-learning keras

Introduction

Code repository for:
Multicenter Validated Detection of Focal Cortical Dysplasia using Deep Learning



Please cite:

Gill, R. S., Lee, H. M., Caldairou, B., Hong, S. J., Barba, C., Deleo, F., D'Incerti, L., Mendes Coelho, V. C., Lenge, M., Semmelroch, M., Schrader, D. V., Bartolomei, F., Guye, M., Schulze-Bonhage, A., Urbach, H., Cho, K. H., Cendes, F., Guerrini, R., Jackson, G., Hogan, R. E., … Bernasconi, A. (2021). Multicenter Validation of a Deep Learning Detection Algorithm for Focal Cortical Dysplasia. Neurology, 97(16), e1571–e1582. https://doi.org/10.1212/WNL.0000000000012698

OR

@article{GillFCD2021,
  title = {Multicenter Validated Detection of Focal Cortical Dysplasia using Deep Learning},
  author = {Gill, Ravnoor Singh and Lee, Hyo-Min and Caldairou, Benoit and Hong, Seok-Jun and Barba, Carmen and Deleo, Francesco and D'Incerti, Ludovico and Coelho, Vanessa Cristina Mendes and Lenge, Matteo and Semmelroch, Mira and others},
  journal = {Neurology},
  year = {2021},
  publisher = {American Academy of Neurology},
  code = {\url{https://github.com/NOEL-MNI/deepFCD}},
  doi = {10.1212/WNL.0000000000012698}
}

Pre-requisites

0. Anaconda Python Environment
1. Python == 3.8
2. Keras == 2.2.4
3. Theano == 1.0.4
4. ANTsPy == 0.4.2 (for MRI preprocessing)
5. ANTsPyNet == 0.2.3 (for deepMask)
6. PyTorch == 1.8.2 LTS (for deepMask)
7. h5py == 2.10.0
+ app/requirements.txt
+ app/deepMask/app/requirements.txt

Installation

# clone the repo with the deepMask submodule
git clone --recurse-submodules -j2 https://github.com/NOEL-MNI/deepFCD.git
cd deepFCD

# install Miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p $HOME/miniconda

# create and activate a Conda environment for preprocessing
conda create -n preprocess python=3.8
conda activate preprocess
# install dependencies using pip
python -m pip install -r app/deepMask/app/requirements.txt
conda deactivate

# create and activate a Conda environment for deepFCD
conda create -n deepFCD python=3.8
conda activate deepFCD
# install dependencies using pip
python -m pip install -r app/requirements.txt
conda install -c conda-forge pygpu=0.7.6
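
To verify that the deepFCD environment resolved correctly, a quick sanity check against the pins above (a minimal sketch; run inside the activated deepFCD environment):

import os
os.environ.setdefault("KERAS_BACKEND", "theano")  # deepFCD uses the Theano backend
import theano
import keras
import h5py

print("Theano:", theano.__version__)  # expect 1.0.4
print("Keras:", keras.__version__)    # expect 2.2.4
print("h5py:", h5py.__version__)      # expect 2.10.0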

Usage

1. Directory Organization

The assumed organization of the directories is specified below:

${IO_DIRECTORY}
└── ${PATIENT_ID}/              # [this patient-specific directory is contained within ${IO_DIRECTORY}]
    ├── noel_deepFCD_dropoutMC  # [deepFCD output images]
    │   ├── ${PATIENT_ID}_noel_deepFCD_dropoutMC_prob_mean_0.nii.gz # [mean PROBABILITY image from CNN-1]
    │   ├── ${PATIENT_ID}_noel_deepFCD_dropoutMC_prob_mean_1.nii.gz # [mean PROBABILITY image from CNN-2]
    │   ├── ${PATIENT_ID}_noel_deepFCD_dropoutMC_prob_var_0.nii.gz  # [mean UNCERTAINTY image from CNN-1]
    │   └── ${PATIENT_ID}_noel_deepFCD_dropoutMC_prob_var_1.nii.gz  # [mean UNCERTAINTY image from CNN-2]
    ├── ${T1_IMAGE}.nii.gz
    └── ${FLAIR_IMAGE}.nii.gz
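
For convenience, a minimal Python sketch that stages a patient's images into this layout (file and directory names are illustrative):

from pathlib import Path
import shutil

io_dir = Path("/io")                          # ${IO_DIRECTORY}
patient_dir = io_dir / "FCD_001"              # ${PATIENT_ID}
patient_dir.mkdir(parents=True, exist_ok=True)
shutil.copy("T1.nii.gz", patient_dir)         # ${T1_IMAGE}
shutil.copy("FLAIR.nii.gz", patient_dir)      # ${FLAIR_IMAGE}
# noel_deepFCD_dropoutMC/ is created by inference.py at run time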

2. Training routine [TODO]

3.1 Inference (CPU)

chmod +x ./app/inference.py   # make the script executable; ensure you have the requisite permissions
export OMP_NUM_THREADS=6      # number of threads to use on the CPU; defaults to half the available logical cores
./app/inference.py     \ # the script to perform inference on the multimodal MRI images
    ${PATIENT_ID}      \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
    ${T1_IMAGE}        \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
    ${FLAIR_IMAGE}     \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
    ${IO_DIRECTORY}    \ # input/output directory
    cpu                \ # 'cpu' to run on the CPU, or 'cudaX' to select GPU X, where X is in [0, N-1] and N is the number of installed GPUs
    1                  \ # perform (`1`) or skip (`0`) brain extraction
    1                    # perform (`1`) or skip (`0`) image pre-processing

example:

./app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cpu 1 1

3.2 Inference (GPU)

chmod +x ./app/inference.py   # make the script executable; ensure you have the requisite permissions
./app/inference.py     \ # the script to perform inference on the multimodal MRI images
    ${PATIENT_ID}      \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
    ${T1_IMAGE}        \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
    ${FLAIR_IMAGE}     \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
    ${IO_DIRECTORY}    \ # input/output directory
    cuda0              \ # 'cpu' to run on the CPU, or 'cudaX' to select GPU X, where X is in [0, N-1] and N is the number of installed GPUs
    1                  \ # perform (`1`) or skip (`0`) brain extraction
    1                    # perform (`1`) or skip (`0`) image pre-processing

example:

./app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cuda0 1 1

3.3 Inference using Docker (GPU), requires nvidia-container-toolkit

docker run --rm -it --init \
    --gpus=all                 \ # expose the host GPUs to the guest docker container
    --user="$(id -u):$(id -g)" \ # map user permissions appropriately
    --volume="${IO_DIRECTORY}:/io"   \ # mount the host directory containing the input images as /io inside the container
    noelmni/deep-fcd:latest    \ # docker image containing all the necessary software dependencies
    /app/inference.py  \ # the script to perform inference on the multimodal MRI images
    ${PATIENT_ID}      \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
    ${T1_IMAGE}        \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
    ${FLAIR_IMAGE}     \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
    /io                \ # input/output directory inside the container, mapped to ${IO_DIRECTORY} [DO NOT MODIFY]
    cuda0              \ # 'cpu' to run on the CPU, or 'cudaX' to select GPU X, where X is in [0, N-1] and N is the number of installed GPUs
    1                  \ # perform (`1`) or skip (`0`) brain extraction
    1                    # perform (`1`) or skip (`0`) image pre-processing

example:

docker run --rm -it --init --gpus=all --volume=$PWD/io:/io noelmni/deep-fcd:latest /app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cuda0 1 1

3.4 Inference using Docker (CPU)

docker run --rm -it --init \
    --user="$(id -u):$(id -g)" \ # map user permissions appropriately
    --volume="${IO_DIRECTORY}:/io" \ # mount the host directory containing the input images as /io inside the container
    --env OMP_NUM_THREADS=6    \ # specify number of threads to initialize - by default this variable is set to half the number of available logical cores
    noelmni/deep-fcd:latest    \ # docker image containing all the necessary software dependencies
    /app/inference.py  \ # the script to perform inference on the multimodal MRI images
    ${PATIENT_ID}      \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
    ${T1_IMAGE}        \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
    ${FLAIR_IMAGE}     \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
    /io                \ # input/output directory inside the container, mapped to ${IO_DIRECTORY} [DO NOT MODIFY]
    cpu                \ # 'cpu' to run on the CPU, or 'cudaX' to select GPU X, where X is in [0, N-1] and N is the number of installed GPUs
    1                  \ # perform (`1`) or skip (`0`) brain extraction
    1                    # perform (`1`) or skip (`0`) image pre-processing

example:

docker run --rm -it --init --env OMP_NUM_THREADS=6 --volume=$PWD/io:/io noelmni/deep-fcd:latest /app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cpu 1 1

4. Reporting

[figure: example report output]

4.1 Reporting output

chmod +x ./app/utils/reporting.py
./app/utils/reporting.py ${PATIENT_ID} ${IO_DIRECTORY}

example:

./app/utils/reporting.py FCD_001 /io

4.2 Reporting output using Docker

docker run --rm -it --init \
    --user="$(id -u):$(id -g)" \
    --volume="${IO_DIRECTORY}:/io" \
    noelmni/deep-fcd:latest \
    /app/utils/reporting.py ${PATIENT_ID} /io

example:

docker run --rm -it --init --volume=$PWD/io:/io noelmni/deep-fcd:latest /app/utils/reporting.py FCD_001 /io

License

Copyright 2023 Neuroimaging of Epilepsy Laboratory, McGill University


Issues

Query Regarding Squeezing Operation in patch_dataloader.py's load_training_data() Method

Description:
In the patch_dataloader.py file's load_training_data() method, you have performed a squeezing operation on the 'Y' array using the following code:

if Y.shape[3] == 1:
    Y = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, :]
else:
    Y = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2]
Y = np.squeeze(Y)

Query:
What is the purpose and necessity of this operation?
And why do you select the center indices 'Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2' from a patch of size (16, 16, 16)?
Your insights would be greatly appreciated.
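
For context, a reading of what this indexing appears to do (an illustrative sketch, not an authoritative answer): each label patch is reduced to its central voxel, so the network is trained to predict the label at the centre of each patch.

import numpy as np

# toy batch of eight 16x16x16 label patches
Y = np.random.randint(0, 2, size=(8, 16, 16, 16))

# reduce each patch to its central voxel -- the label the
# patch-based classifier is trained to predict
center = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2]
print(center.shape)  # (8,): one label per patch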

Inference cpu Docker container

I am trying to run the CPU inference Docker container; however, I am running into an issue where the BRAIN_MASKING environment variable is not set correctly, so the brain_extraction function is never imported in the image_processing script.

Error: /tmp/zou_2519.log - No such file or directory.
loading nifti files
registration to MNI template space
performing N3 bias correction
performing brain extraction using ANTsPyNet
Traceback (most recent call last):
  File "/app/preprocess.py", line 94, in <module>
    noelImageProcessor(
  File "/app/deepMask/app/utils/image_processing.py", line 440, in pipeline
    self.__skull_stripping()
  File "/app/deepMask/app/utils/image_processing.py", line 181, in __skull_stripping
    prob = brain_extraction(self._t1_n4, modality="t1")
NameError: name 'brain_extraction' is not defined

I am running the Docker image on macOS with an M3 processor.

Thanks
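
For context, the NameError is consistent with an import guarded by the BRAIN_MASKING variable, roughly like the following (a hypothetical reconstruction, not the repository's actual code):

import os

# brain_extraction is bound only when the flag is set ...
if os.environ.get("BRAIN_MASKING", "0") == "1":
    from antspynet.utilities import brain_extraction

def skull_strip(t1_n4):
    # ... but it is referenced unconditionally, raising NameError
    # whenever BRAIN_MASKING is unset or falsy
    return brain_extraction(t1_n4, modality="t1")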

Output predictions in structural space

Description

How can the internal co-registration performed by deepMask be reversed?

Actual Behavior

Output is always normalized.

Possible Fix

Allow a flag to translate predictions back to input space.
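
A possible sketch of such a fix using ANTsPy, assuming the forward affine from the MNI registration was saved during preprocessing (all file names here are hypothetical):

import ants

# hypothetical file names: the native-space T1 and a deepFCD
# probability map in template (MNI) space
native = ants.image_read("t1_native.nii.gz")
prob_mni = ants.image_read("prob_mean_mni.nii.gz")

# apply the inverse of the saved native->template affine to bring
# the prediction back to the input (structural) space
prob_native = ants.apply_transforms(
    fixed=native,
    moving=prob_mni,
    transformlist=["native_to_template_affine.mat"],
    whichtoinvert=[True],
    interpolator="linear",
)
ants.image_write(prob_native, "prob_mean_native.nii.gz")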

Installation broken since packages not available anymore and tensorflow version not specified

Description

antspyx==0.3.5 not available

Possible Fix

works with the following combination:

antspyx==0.3.8 --only-binary=antspyx
git+https://github.com/ravnoor/atlasreader@master#egg=atlasreader
Theano==1.0.4
keras==2.2.4
h5py==2.10.0
matplotlib==3.5.1
mo-dots==9.147.22086
nibabel==3.2.2
nilearn==0.9.1
numpy==1.18.5
pandas==1.3.5
psutil==5.9.2
scikit-image==0.19.2
scikit-learn==1.0.2
scipy==1.7.3
setproctitle==1.2.3
tabulate==0.9.0
tqdm==4.64.0
xlrd==2.0.1
tensorflow==1.15.5
tensorflow_probability==0.8


Unable to locate code for thresholding and feeding voxels from CNN1 to CNN2

Description

I am trying to understand the code and have read the research paper, which states that when using the cascading model, voxels from CNN1 are supposed to be thresholded and then fed to CNN2. Specifically, the paper mentions that "the mean of 20 forward passes (or predictions) was thresholded at >0.1 (equivalent to rejecting bottom 10 percentile probabilities); voxels surviving this threshold served as the input to sample patches for CNN-2."

However, I have been unable to find the code for this thresholding operation in train.py or base.py (the train_model() method).
I would appreciate it if someone could help me locate this code or provide further information on how it is implemented.

Additional information:

  • I have read the research paper and the relevant sections of the code.
  • I have searched for the relevant keywords in the code, but I have not been able to find the specific code for the thresholding operation.
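
For reference, a minimal sketch of the thresholding step as the paper describes it (illustrative names and shapes; not a pointer to the repository's implementation):

import numpy as np

# placeholder: 20 Monte Carlo forward passes of CNN-1 over a volume
mc_preds = np.random.rand(20, 64, 64, 64)

mean_prob = mc_preds.mean(axis=0)  # mean of the 20 forward passes
candidate_mask = mean_prob > 0.1   # reject the bottom 10th-percentile probabilities

# voxels surviving the threshold seed the patch sampling for CNN-2
candidate_voxels = np.argwhere(candidate_mask)
print(candidate_voxels.shape)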

Migrate code base to `pytorch`

  • Keras (w/ theano backend) doesn't support model weight conversion to other frameworks (pytorch, tensorflow, onnx, etc.)
  • Original theano has been deprecated
  • deepMask source code is already in pytorch
  • pytorch has native support for Apple Silicon (M-series, CPU+GPU) and ROCm platforms

Issue with sys.argv

Hello there. Thanks for putting this out there, first of all.
I am attempting to set this up at my workplace (without Docker), but I have been having issues with the sys.argv calls.
The first failure is right at the beginning of inference.py, on the GPU argument (easily circumvented by replacing it with 'cpu'), but the same problem recurs at line 45 of inference.py (and, I suspect, the lines that follow). Running under JupyterLab, I believe sys.argv[3] and beyond are simply not populated, while the script reads sys.argv[3], [4], and [5] at some point.
Would you have any ideas on how to deal with this?
Thanks.
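
Not an answer from the authors, but a common workaround when running argv-driven scripts interactively is to populate sys.argv before the script executes; a minimal sketch using the argument order from the README:

import sys

# inference.py reads its inputs positionally from sys.argv; under
# Jupyter, sys.argv holds the kernel's own arguments, so stub in the
# expected values before executing the script
sys.argv = [
    "inference.py",
    "FCD_001",       # ${PATIENT_ID}
    "t1.nii.gz",     # ${T1_IMAGE}
    "flair.nii.gz",  # ${FLAIR_IMAGE}
    "/io",           # ${IO_DIRECTORY}
    "cpu",           # device: 'cpu' or 'cudaX'
    "1",             # brain extraction on/off
    "1",             # pre-processing on/off
]
# then run the script in-process, e.g. with %run ./app/inference.py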

CPU running problems

I downloaded the project and tried to run it on some NIfTI files of cortical dysplasias that I have. As I do not have a CUDA-compatible GPU, I tried to run it on my CPU; however, when performing the inference step on one image, it gets stuck at 0% for at least 5 hours without any progress.

The specifications of my computer are the ones that follow:

MacBook Pro 2020, macOS Ventura 13, CPU: 2.3 GHz Intel Core i7 (4 cores), RAM: 16 GB 3733 MHz LPDDR4X

Training routine

Hey,
I am trying transfer learning on my data with the given weights. Do you have an updated training routine script for this? Do you know if it is possible to translate Theano weights to PyTorch?
Thanks, VY
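
On the weight-translation question: Keras saves weights as plain HDF5 arrays, so they can at least be read out as a starting point; a hedged sketch (file and layer names are hypothetical, and note that Theano-backend convolution kernels generally need flipping/transposing before use in PyTorch):

import h5py
import numpy as np

# dump the raw arrays from a Keras HDF5 weights file; each array could
# then be copied into a matching torch state_dict entry, after
# reordering convolution kernels to PyTorch's (out, in, d, h, w) layout
with h5py.File("weights.h5", "r") as f:
    for layer in f.attrs["layer_names"]:
        group = f[layer.decode()]
        for name in group.attrs["weight_names"]:
            arr = np.array(group[name.decode()])
            print(layer.decode(), name.decode(), arr.shape)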

Issue using relative paths for data directories

Description

When using relative paths for ${IO_DIRECTORY} (see README.md) to execute inference with inference.py, ${IO_DIRECTORY} is duplicated in args.dir and in all subsequent dependent variables -- most notably options['test_folder']. First reported by @creativedoctor via email.

Expected Behavior

When executing inference using either
./inference.py FCD_001 t1.nii.gz t2.nii.gz ./data cuda0 1 1 or ./inference.py FCD_001 t1.nii.gz t2.nii.gz /home/user/deepFCD/app/data cuda0 1 1, ./data/FCD_001 should map to /home/user/deepFCD/app/data/FCD_001

Actual Behavior

Instead, ./data maps to ./data/FCD_001/./data/FCD_001/

Possible Fix

Convert all relative path instances of args.dir to absolute paths
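
A minimal sketch of that fix, normalizing the user-supplied path once before any joins (function name is illustrative):

import os

def normalize_io_dir(path):
    # resolve relative paths up front so ./data and
    # /home/user/deepFCD/app/data behave identically downstream
    return os.path.abspath(os.path.expanduser(path))

print(normalize_io_dir("./data"))  # e.g. /home/user/deepFCD/app/data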

Steps to Reproduce

See Expected Behavior

Context

${IO_DIRECTORY} was placed inside the deepFCD root folder. Untested whether this also affects absolute paths outside the deepFCD root folder.

Your Environment

  • Version used (e.g. v1.1.1 or commit hash): d13c903
  • Environment name and version: Linux (Conda 4.11 w/ Python 3.7.13)
  • Hardware: 40 logical cores, 512GB RAM, NVIDIA TITAN RTX w/ 24GB VRAM
  • Using GPU or CPU: issue persists under both GPU and CPU variants
  • Operating System (OS) and version: Ubuntu 20.04.3 LTS x86_64
  • OS baremetal or virtualized: baremetal

Inquiry Regarding the Extraction of Patches from 3D Co-Registered Images and Linear Registration of FLAIR to T1-weighted Images

Description

I am currently having difficulty locating the code for extracting patches from 3D co-registered images, and the code for the linear registration of the FLAIR image to the T1-weighted image.
The process for extracting 3D patches involves thresholding FLAIR images by z-normalizing their intensities and subsequently discarding the bottom 10th percentile. My primary concern is determining the appropriate threshold value for this operation.

Query:

I would greatly appreciate guidance on selecting an optimal threshold value for this operation. Any insights, best practices, or recommended approaches would be invaluable in assisting me with this task.

Your assistance and expertise in addressing this matter would be highly regarded.
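
On the thresholding part, a minimal sketch of z-normalizing a FLAIR volume and discarding the bottom 10th percentile, as the paper describes (an illustration, not the repository's code):

import numpy as np

# placeholder FLAIR volume; restrict statistics to nonzero (in-brain) voxels
flair = np.random.rand(64, 64, 64) * 1000
brain = flair[flair > 0]

z = (flair - brain.mean()) / brain.std()  # z-normalize intensities
cutoff = np.percentile(z[flair > 0], 10)  # bottom 10th percentile
mask = z > cutoff                         # voxels eligible for patch sampling
print(mask.sum(), "voxels survive the threshold")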
