noel-mni / deepFCD
Automated Detection of Focal Cortical Dysplasia using Deep Learning
Home Page: https://noel.bic.mni.mcgill.ca/projects/
License: BSD 3-Clause "New" or "Revised" License
When trying to install requirements, antspyx version 0.3.2 does not exist anymore.
How can the internal coregistration via deepMask be reversed?
Output is always normalized.
Allow a flag to translate predictions back to input space.
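Such a flag would essentially apply the inverse of the stored registration transform to the predictions. A minimal sketch of the idea, assuming the forward registration to template space is available as a 4x4 affine (deepMask's actual transform files would be inverted with the registration toolkit itself, e.g. ANTs):

```python
import numpy as np

# Hedged sketch: invert a forward (native -> template) affine to map
# template-space predictions back to input space.
forward = np.array([[1.1, 0.0, 0.0, 5.0],
                    [0.0, 0.9, 0.0, -3.0],
                    [0.0, 0.0, 1.0, 2.0],
                    [0.0, 0.0, 0.0, 1.0]])
inverse = np.linalg.inv(forward)  # maps template space back to input space

mni_point = np.array([10.0, 10.0, 10.0, 1.0])  # homogeneous point in template space
native_point = inverse @ mni_point             # prediction mapped back to input space
roundtrip = forward @ native_point
print(np.allclose(roundtrip, mni_point))  # True
```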
I am trying to run the inference CPU Docker container; however, I am running into an issue where it does not set the BRAIN_MASKING environment variable correctly and therefore does not import the brain_extraction function in the image_processing script.
Error: /tmp/zou_2519.log - No such file or directory.
loading nifti files
registration to MNI template space
performing N3 bias correction
performing brain extraction using ANTsPyNet
Traceback (most recent call last):
  File "/app/preprocess.py", line 94, in <module>
    noelImageProcessor(
  File "/app/deepMask/app/utils/image_processing.py", line 440, in pipeline
    self.__skull_stripping()
  File "/app/deepMask/app/utils/image_processing.py", line 181, in __skull_stripping
    prob = brain_extraction(self._t1_n4, modality="t1")
NameError: name 'brain_extraction' is not defined
I am running the Docker image on macOS with an M3 processor.
Thanks
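One plausible cause of a NameError like the one above (an assumption, not confirmed from the source): environment variables are always strings, so an integer comparison in the import guard silently evaluates to False and the brain_extraction import is skipped. A minimal reproduction of that failure mode:

```python
import os

# Environment variables are strings; comparing against an int silently fails.
os.environ["BRAIN_MASKING"] = "1"

imported = False
if os.environ.get("BRAIN_MASKING") == 1:  # buggy: str "1" != int 1
    imported = True

fixed = os.environ.get("BRAIN_MASKING") == "1"  # correct: compare strings
print(imported, fixed)  # False True
```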
I am currently facing challenges in locating the code for (a) extracting patches from 3D co-registered images and (b) the registration of the FLAIR image to the T1-weighted image.
The process for extracting 3D patches involves thresholding FLAIR images by z-normalizing intensities and subsequently discarding the bottom 10th percentile of intensities. My primary concern is determining the appropriate threshold value for this operation.
Also, where is the code for the linear registration of FLAIR to the T1-weighted image?
I would greatly appreciate guidance on selecting an optimal threshold value for this operation. Any insights, best practices, or recommended approaches would be invaluable.
Your assistance and expertise in addressing this matter would be highly regarded.
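Not the repository's code, but the operation described above can be sketched in NumPy. Note that no hand-picked threshold value is needed: the threshold is simply the 10th percentile of the z-scored intensities within the brain mask.

```python
import numpy as np

# Hedged sketch: z-normalize FLAIR intensities within a brain mask, then
# discard the bottom 10th percentile, keeping surviving voxels as
# candidates for patch sampling.
rng = np.random.default_rng(0)
flair = rng.normal(loc=100.0, scale=15.0, size=(32, 32, 32))  # stand-in volume
mask = flair > 0                                              # trivial brain mask

vals = flair[mask]
z = (flair - vals.mean()) / vals.std()      # z-normalization

thresh = np.percentile(z[mask], 10)         # bottom 10th percentile of z-scores
candidates = mask & (z > thresh)            # voxels surviving the threshold
frac = candidates.sum() / mask.sum()
print(frac)                                 # roughly 0.9 of brain voxels kept
```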
When using relative paths for ${IO_DIRECTORY} (see README.md) to execute inference using inference.py, there's a duplication of the ${IO_DIRECTORY} in args.dir and all subsequent dependent variables, most notably options['test_folder']. First reported by @creativedoctor via email.
When executing inference using either
./inference.py FCD_001 t1.nii.gz t2.nii.gz ./data cuda0 1 1
or
./inference.py FCD_001 t1.nii.gz t2.nii.gz /home/user/deepFCD/app/data cuda0 1 1
./data/FCD_001 should map to /home/user/deepFCD/app/data/FCD_001. Instead, ./data maps to ./data/FCD_001/./data/FCD_001/.
Convert all relative path instances of args.dir to absolute paths. See Expected Behaviour.
${IO_DIRECTORY} was placed inside the deepFCD root folder. It is untested whether this affects absolute paths outside the deepFCD root folder.
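A hedged sketch of the proposed fix (resolve_io_dir is a hypothetical helper, not a function in the repository): normalize args.dir to an absolute path once, before any downstream variables such as options['test_folder'] are derived from it, so a relative path can never be concatenated twice.

```python
import os.path

def resolve_io_dir(raw_dir: str) -> str:
    # Expand "~" and resolve "." / ".." against the current working directory
    # exactly once, so every downstream join starts from an absolute path.
    return os.path.abspath(os.path.expanduser(raw_dir))

print(resolve_io_dir("./data"))
```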
Hello there. Thanks for putting this out there, first of all.
I am attempting to set this up at my workplace (without Docker), though I have been having issues with the sys.argv calls.
Namely, right at the beginning of inference.py with the GPU call (easily circumvented by replacing the call with 'cpu'), though this keeps going for line 45 of inference.py (and, I suspect, the lines following it as well). Using JupyterLab, I believe it is because I do not have a value for sys.argv[3] or beyond, and your script calls for [3], [4] and [5] at some point.
Would you have any ideas on how to deal with this?
Thanks.
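One possible workaround for interactive sessions, sketched under the assumption of inference.py's positional argument order (parse_args and DEFAULTS are illustrative names, not from the repository): pad missing trailing arguments with defaults instead of letting sys.argv[3] and beyond raise IndexError.

```python
# Fallback values in the same order as the script's positional arguments.
DEFAULTS = ["FCD_001", "t1.nii.gz", "t2.nii.gz", "./data", "cpu", "1", "1"]

def parse_args(argv):
    # Keep whatever was supplied; fill the rest from DEFAULTS so missing
    # trailing arguments (as in Jupyter) never raise IndexError.
    return list(argv) + DEFAULTS[len(argv):]

subject, t1, t2, io_dir, device, *_ = parse_args(["FCD_001", "t1.nii.gz"])
print(subject, device)  # FCD_001 cpu
```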
- theano (backend) doesn't support model weight conversion to other frameworks (pytorch, tensorflow, onnx, etc.)
- theano has been deprecated
- deepMask source code is already in pytorch
- pytorch has native support for Mx Apple Silicon (cpu+gpu) and ROCm platforms
- anaconda in Colab environment: pygpu install depends on conda
- theano environment flags required for GPU execution

Installation broken since packages are not available anymore and the tensorflow version is not specified
antspyx==0.3.5 not available
works with the following combination:
antspyx==0.3.8 --only-binary=antspyx
git+https://github.com/ravnoor/atlasreader@master#egg=atlasreader
Theano==1.0.4
keras==2.2.4
h5py==2.10.0
matplotlib==3.5.1
mo-dots==9.147.22086
nibabel==3.2.2
nilearn==0.9.1
numpy==1.18.5
pandas==1.3.5
psutil==5.9.2
scikit-image==0.19.2
scikit-learn==1.0.2
scipy==1.7.3
setproctitle==1.2.3
tabulate==0.9.0
tqdm==4.64.0
xlrd==2.0.1
tensorflow==1.15.5
tensorflow_probability==0.8
--help menu
I am trying to understand the code and have read the research paper, which states that when using the cascading model, voxels from CNN1 are supposed to be thresholded and then fed to CNN2. Specifically, the paper mentions that "the mean of 20 forward passes (or predictions) was thresholded at >0.1 (equivalent to rejecting bottom 10 percentile probabilities); voxels surviving this threshold served as the input to sample patches for CNN-2."
However, I have been unable to find the code for this thresholding operation in train.py or in base.py (the train_model() method).
I would appreciate it if someone could help me locate this code or provide further information on how it is implemented.
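For reference, the operation the paper describes can be sketched in NumPy. This is a stand-in, not the repository's implementation:

```python
import numpy as np

# Hedged sketch: average 20 stochastic forward passes and keep voxels
# whose mean probability exceeds 0.1; surviving voxels would seed the
# patch sampling for CNN-2.
rng = np.random.default_rng(42)
passes = rng.random((20, 8, 8, 8))   # stand-in for 20 MC-dropout predictions
mean_prob = passes.mean(axis=0)      # mean over the 20 forward passes
cnn2_input_mask = mean_prob > 0.1    # voxels surviving the >0.1 threshold
print(cnn2_input_mask.any())         # True: with a mean near 0.5, voxels survive
```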
antspynet requires tensorflow >= 2.9
Description:
In the patch_dataloader.py file's load_training_data() method, you have performed a squeezing operation on the 'Y' array using the following code:
if Y.shape[3] == 1:
    Y = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, :]
else:
    Y = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2]
Y = np.squeeze(Y)
Query:
What is the purpose and necessity of this operation?
And why have you selected 'Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2' values from the patch having size (16,16,16)?
Your insights would be greatly appreciated.
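A small demonstration of what that indexing does (a sketch, not the repository's code): for label patches of shape (num_patches, 16, 16, 16), indexing with shape[i] // 2 along each spatial axis selects the center voxel, reducing each patch to the label of the voxel it is centered on, which is the target a patch-wise classifier predicts.

```python
import numpy as np

# Stand-in label patches: 4 patches of 16x16x16 voxels.
Y = np.zeros((4, 16, 16, 16))
Y[:, 8, 8, 8] = 1  # mark each patch's center voxel as lesion

# shape[i] // 2 == 8 along every spatial axis, i.e. the center voxel.
centers = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2]
print(centers)  # [1. 1. 1. 1.]
```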
Hey,
I am trying transfer learning on my data with the given weights. Do you have an updated training routine script for this? Do you know if it is possible to translate Theano weights to PyTorch?
Thanks, VY
I downloaded the project and tried to run it on some NIfTI files of cortical dysplasias I have. As I do not have a CUDA-compatible GPU, I tried to run it on my CPU; however, when performing the inference step on one image, it gets stuck at 0% for at least 5 hours without any progress.
The specifications of my computer are as follows:
MacBook Pro 2020, macOS Ventura 13, CPU: 2.3 GHz Intel Core i7 (4 cores), RAM: 16 GB 3733 MHz LPDDR4X
This should be = label_list[l], instead of = num_elements_by_lesion[l].astype(np.int), right?