robbert-harms / mdt
Microstructure Diffusion Toolbox
License: GNU Lesser General Public License v3.0
Hi there,
I am running numpy 1.24.0 and get the error 'AttributeError: module numpy has no attribute float'.
It seems this will be a major problem going forward. It can be solved either by replacing all such calls with np.float64(), or by removing the np. prefix and leaving a plain float() call.
There are also problems with np.bool, which needs to be replaced with bool.
One could downgrade the numpy version, but I don't think that would be a long-term solution.
Thoughts?
Update: After fixing all of the np.bool and np.float calls, I still get a segmentation fault with the Powell optimiser. However, there are no problems with LM.
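The replacements described above can be sketched as follows (a minimal illustration, not MDT's actual code; the variables are made up for the example):

```python
# Minimal sketch of the fix described above: numpy >= 1.24 removed the
# deprecated aliases np.float and np.bool, so code must use the builtins
# (or explicit sized types such as np.float64).
try:
    import numpy as np
    has_old_alias = hasattr(np, "float")  # False on numpy >= 1.24
except ImportError:                        # keep the sketch runnable without numpy
    has_old_alias = False

value = float("3.14")  # instead of the removed np.float("3.14")
flag = bool(1)         # instead of the removed np.bool(1)
```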
Robbert,
Thank you so much for making this code freely available, it's a great package and was working fine until a few months ago.
Now I am unable to run it without getting the following errors. This is being run on Linux (CentOS), attempting to run the NODDI module. The errors seem to be related to OpenCL, but I am not sure what changed on the system to generate this error, which I was not getting previously (it is possible that IT updated Python or some other module that MDT uses in the interim, but I haven't been able to figure that out). Thanks for any advice on how to fix this.
Best,
Mark
Traceback (most recent call last):
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/bin/mdt-model-fit", line 6, in <module>
    from mdt.cli_scripts.mdt_model_fit import ModelFit
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mdt/__init__.py", line 8, in <module>
    import mot
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mot/__init__.py", line 3, in <module>
    from .optimize import minimize, get_minimizer_options
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mot/optimize/__init__.py", line 1, in <module>
    from mot.lib.cl_function import SimpleCLFunction
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mot/lib/cl_function.py", line 6, in <module>
    from mot.configuration import CLRuntimeInfo
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mot/configuration.py", line 20, in <module>
    from .lib.cl_environments import CLEnvironmentFactory
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mot/lib/cl_environments.py", line 180, in <module>
    _cl_environment_cache = _initialize_cl_environment_cache()
  File "/public/apps/anaconda/3-5.2.0/envs/mdt/lib/python3.7/site-packages/mot/lib/cl_environments.py", line 160, in _initialize_cl_environment_cache
    for platform in cl.get_platforms():
pyopencl._cl.LogicError: clGetPlatformIDs failed: PLATFORM_NOT_FOUND_KHR
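For what it's worth, PLATFORM_NOT_FOUND_KHR is what the OpenCL ICD loader raises when it finds no platforms at all, which fits the theory of a system update having removed or broken the OpenCL driver rather than anything inside MDT. A small diagnostic sketch (it assumes only pyopencl; the function name is invented for the example):

```python
def list_opencl_platforms():
    """Report which OpenCL platforms are visible, without changing anything."""
    try:
        import pyopencl as cl
    except ImportError:
        return "pyopencl is not installed"
    try:
        return [p.name for p in cl.get_platforms()]
    except cl.LogicError:
        # Raised as PLATFORM_NOT_FOUND_KHR when no ICD files are registered,
        # e.g. when /etc/OpenCL/vendors is empty after a driver update.
        return "no OpenCL platforms found"

print(list_opencl_platforms())
```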
Hi,
my issue regards the option "patience" for the fit processing: as I understand it, it should set the number of iterations of the fit process. My questions are: what is the difference between the iterations? Is it the starting points for the non-initialized parameters? If so, which stochastic method is used to select the starting points? Is there some kind of perturbation of the first set of starting points?
Regards
Stefania Oliviero
Dear Robbert,
I was wondering if it is possible to install/build MDT without sudo privileges on a CentOS 6.10 cluster.
Best,
Arash
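One route that usually avoids sudo entirely is a user-level virtual environment (a sketch, assuming a python3 with the venv module is already available on the cluster; the paths are illustrative):

```shell
# Everything below writes only into $HOME, so no root privileges are needed.
python3 -m venv "$HOME/mdt-env"
source "$HOME/mdt-env/bin/activate"
pip install --upgrade pip
pip install mdt
```

Note that MDT still needs a working OpenCL runtime; on a GPU-less node without root access, a CPU runtime such as pocl installed through conda is sometimes an option.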
Hi all, I would like to know the difference between the models CHARMED-r1, CHARMED-r2, and CHARMED-r3. Unfortunately, I cannot find the description of the model parameters.
Thanks a lot for the support.
Regards
Stefania Oliviero
Hi Robbert,
I am noticing some odd behaviour when running mdt-model-fit, where the model I specify is not run, but a model that was run previously (but terminated without completion) is chosen instead.
I am running on intel CPU with singularity, v0.20.3.
E.g. when I use the model Tensor, it ends up running BallStick_r1 first (which was the job I initially ran, then terminated), and when that is complete, it uses the Tensor model:
Singularity khanlab_mdt_v0.20.3.img:/scratch/akhanf/test_mdt_custom/7T> mdt-model-fit Tensor sub-CT01_dwi_space-T1wGC_preproc.nii.gz sub-CT01_dwi_space-T1wGC_preproc.prtcl sub-CT01_dwi_space-T1wGC_brainmask.nii.gz
[2019-03-31 20:30:45,719] [INFO] [mdt.lib.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2019-03-31 20:30:45,866] [INFO] [mdt.lib.model_fitting] [_apply_user_provided_initialization_data] - Preparing model BallStick_r1 with the user provided initialization data.
[2019-03-31 20:30:45,873] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Using MDT version 0.20.3
[2019-03-31 20:30:45,874] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2019-03-31 20:30:45,874] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Current cascade: ['BallStick_r1']
[2019-03-31 20:30:47,199] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 103 volumes.
[2019-03-31 20:30:47,200] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2019-03-31 20:30:47,314] [INFO] [mdt.utils] [estimate_noise_std] - Estimated global noise std 37.394683837890625.
[2019-03-31 20:30:47,315] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2019-03-31 20:30:47,316] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2019-03-31 20:30:47,316] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Saving temporary results in output/sub-CT01_dwi_space-T1wGC_brainmask/BallStick_r1/tmp_results.
[2019-03-31 20:30:47,354] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (399714 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2019-03-31 20:30:47,410] [INFO] [mdt.lib.processing_strategies] [_process] - Starting optimization
[2019-03-31 20:30:47,410] [INFO] [mdt.lib.processing_strategies] [_process] - Using MOT version 0.9.1
[2019-03-31 20:30:47,410] [INFO] [mdt.lib.processing_strategies] [_process] - We will use a single precision float type for the calculations.
[2019-03-31 20:30:47,410] [INFO] [mdt.lib.processing_strategies] [_process] - Using device 'CPU - Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz (Intel(R) OpenCL)'.
[2019-03-31 20:30:47,410] [INFO] [mdt.lib.processing_strategies] [_process] - Using compile flags: ['-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros']
[2019-03-31 20:30:47,411] [INFO] [mdt.lib.processing_strategies] [_process] - We will use the optimizer Powell with default settings.
/usr/lib/python3/dist-packages/pyopencl/__init__.py:63: CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more.
"to see more.", CompilerWarning)
[2019-03-31 20:31:45,677] [INFO] [mdt.lib.processing_strategies] [_process] - Finished optimization
[2019-03-31 20:31:45,678] [INFO] [mdt.lib.processing_strategies] [_process] - Starting post-processing
[2019-03-31 20:32:54,010] [INFO] [mdt.lib.processing_strategies] [_process] - Finished post-processing
[2019-03-31 20:32:54,307] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 25.02%, processing next 100000 voxels (399714 voxels in total, 100000 processed). Time spent: 0:00:02:06, time left: 0:00:06:20 (d:h:m:s).
[2019-03-31 20:35:02,886] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 50.04%, processing next 100000 voxels (399714 voxels in total, 200000 processed). Time spent: 0:00:04:15, time left: 0:00:04:15 (d:h:m:s).
[2019-03-31 20:37:09,950] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 75.05%, processing next 99714 voxels (399714 voxels in total, 300000 processed). Time spent: 0:00:06:22, time left: 0:00:02:07 (d:h:m:s).
[2019-03-31 20:39:15,644] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 100%
[2019-03-31 20:39:15,644] [INFO] [mdt.lib.processing_strategies] [process] - Computed all voxels, now creating nifti's
[2019-03-31 20:39:19,118] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - Fitted BallStick_r1 model with runtime 0:00:08:31 (d:h:m:s).
[2019-03-31 20:39:19,166] [INFO] [mdt.lib.model_fitting] [get_model_fit] - Finished intermediate optimization for generating initialization point.
[2019-03-31 20:39:19,818] [INFO] [mdt.lib.model_fitting] [_apply_user_provided_initialization_data] - Preparing model Tensor with the user provided initialization data.
[2019-03-31 20:39:19,858] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Using MDT version 0.20.3
[2019-03-31 20:39:19,859] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Preparing for model Tensor
[2019-03-31 20:39:19,859] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Current cascade: ['Tensor']
[2019-03-31 20:39:20,425] [INFO] [mdt.models.composite] [_prepare_input_data] - For this model, Tensor, we will use a subset of the volumes.
[2019-03-31 20:39:20,426] [INFO] [mdt.models.composite] [_prepare_input_data] - Using 42 out of 103 volumes, indices: [0 2 5 8 11 14 17 18 21 24 27 30 33 34 37 40 43 46 49 50 53 56 59 62 65 66 69 72 75 78 81 82 85 88 91 94 96 97 99 100 101 102]
[2019-03-31 20:39:20,547] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - Fitting Tensor model
[2019-03-31 20:39:20,548] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - The 7 parameters we will fit are: ['S0.s0', 'Tensor.d', 'Tensor.dperp0', 'Tensor.dperp1', 'Tensor.theta', 'Tensor.phi', 'Tensor.psi']
[2019-03-31 20:39:20,548] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Saving temporary results in output/sub-CT01_dwi_space-T1wGC_brainmask/Tensor/tmp_results.
[2019-03-31 20:39:20,583] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (399714 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2019-03-31 20:39:20,634] [INFO] [mdt.lib.processing_strategies] [_process] - Starting optimization
[2019-03-31 20:39:20,634] [INFO] [mdt.lib.processing_strategies] [_process] - Using MOT version 0.9.1
[2019-03-31 20:39:20,634] [INFO] [mdt.lib.processing_strategies] [_process] - We will use a single precision float type for the calculations.
[2019-03-31 20:39:20,634] [INFO] [mdt.lib.processing_strategies] [_process] - Using device 'CPU - Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz (Intel(R) OpenCL)'.
[2019-03-31 20:39:20,635] [INFO] [mdt.lib.processing_strategies] [_process] - Using compile flags: ['-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros']
[2019-03-31 20:39:20,635] [INFO] [mdt.lib.processing_strategies] [_process] - We will use the optimizer Powell with default settings.
(output truncated...)
Any idea why this might be happening, or if there is some internal queue that could be cleared manually?
Thanks!
Ali
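Judging from the log line "Starting intermediate optimization for generating initialization point", the BallStick_r1 run is MDT's automatic initialization pass for Tensor rather than a queued leftover job. A hedged sketch of skipping it from the Python API (the use_cascaded_inits keyword exists in later MDT versions; whether 0.20.3 accepts it is an assumption to verify against the installed fit_model signature):

```python
def fit_without_init_cascade(mdt_module, model, input_data, output_folder):
    """Fit `model` directly, skipping the automatic simpler-model
    initialization pass (the BallStick_r1 run in the log above).

    `use_cascaded_inits` is an assumed keyword; check your MDT version.
    """
    return mdt_module.fit_model(model, input_data, output_folder,
                                use_cascaded_inits=False)
```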
Hi,
I have a problem in my code, where I am attempting to analyse multishell diffusion data using different models. Everything seems to load properly until...
[2023-04-27 14:24:24,341] [INFO] [mdt.lib.processing.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2023-04-27 14:24:24,454] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.7
[2023-04-27 14:24:24,454] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2023-04-27 14:24:24,506] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 43 volumes.
[2023-04-27 14:24:24,506] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2023-04-27 14:24:24,506] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2023-04-27 14:24:24,506] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in F:\ihMT_ExVivo\20230416_A2BrainPiece_Diffusion\ECMOCO_HYSCO\output\Tensor_ExVivo\Multishell_b1k\BallStick_r1\tmp_results.
[2023-04-27 14:24:24,634] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 81584 voxels (81584 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2023-04-27 14:24:24,634] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2023-04-27 14:24:24,638] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.4
[2023-04-27 14:24:24,638] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a single precision float type for the calculations.
[2023-04-27 14:24:24,638] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'GPU - Intel(R) HD Graphics 520 (Intel(R) OpenCL HD Graphics)'.
[2023-04-27 14:24:24,638] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2023-04-27 14:24:24,638] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Powell with optimizer settings {'patience': 10}
InvalidArraySize: Array size must be at least 1: [0 x i8] [Src: d:\qb\workspace\22235\source\llvm_source\projects\llvm-spirv\lib\spirv\spirvwriter.cpp:331 T->getArrayNumElements() >= 1 ]
Process finished with exit code -1073740791 (0xC0000409)
I have two questions: (1) I want to use Tensor, so why is it using the BallStick_r1 model first? Is there a way to avoid this fit and jump directly to Tensor (and the other models)? And (2) is there a way to debug this and see where the error comes from? As far as I could observe, the main problem appears at line 371 of model_fitting.py (in the codec.encode function).
(370) kernel_data_subset = self._kernel_data.get_subset(roi_indices)
(371) x0 = self._codec.encode(self._initial_params[roi_indices], kernel_data_subset)
Thanks,
Frank
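On question (1), the log's "intermediate optimization for generating initialization point" suggests the BallStick_r1 fit is MDT's automatic initialization step, not something in your script. On question (2), the InvalidArraySize message comes from the Intel HD Graphics OpenCL compiler, so trying a different OpenCL device is a reasonable first experiment. A hedged sketch combining both (the keyword names are assumptions to check against the installed mdt.fit_model signature):

```python
def fit_tensor_directly(mdt_module, input_data, output_folder, device_index):
    """Fit Tensor on an explicitly chosen OpenCL device, skipping the
    automatic BallStick_r1 initialization pass.

    `cl_device_ind` and `use_cascaded_inits` are assumed keywords;
    verify them against your MDT version.
    """
    return mdt_module.fit_model('Tensor', input_data, output_folder,
                                cl_device_ind=device_index,
                                use_cascaded_inits=False)
```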
Hello MDT Team,
Thanks very much for putting this toolbox up. I have read your paper with great interest and now I am keen to integrate MDT into our pipeline to process NODDI data. However, I am running into an issue after installation.
The early checks seemed fine.
mdt-list-devices
Device 0:
CPU - Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Apple)
Device 1:
GPU - AMD Radeon R9 M380 Compute Engine (Apple)
and MDT launches the GUI as expected. I selected the CPU, as the GPU option resulted in system crashes (Mac OS X).
objc[7979]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff89d421d0) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x130881dc8). One of the two will be used. Which one is undefined.
2019-03-19 16:07:03.469 Python[7979:525286] [QL] Can't get plugin bundle info at file:///Applications/GarageBand.app/Contents/Library/QuickLook/GarageBandQLGenerator.qlgenerator/
2019-03-19 16:07:03.469 Python[7979:525286] [QL] Can't get plugin bundle info at file:///Applications/GarageBand.app/Contents/Library/QuickLook/LogicXQLGenerator.qlgenerator/
[2019-03-19 16:07:32,726] [INFO] [mdt.lib.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2019-03-19 16:07:32,850] [INFO] [mdt.lib.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2019-03-19 16:07:32,945] [INFO] [mdt.lib.model_fitting] [_apply_user_provided_initialization_data] - Preparing model BallStick_r1 with the user provided initialization data.
[2019-03-19 16:07:32,970] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Not recalculating BallStick_r1 model
[2019-03-19 16:07:32,990] [INFO] [mdt.lib.model_fitting] [get_model_fit] - Finished intermediate optimization for generating initialization point.
[2019-03-19 16:07:33,355] [INFO] [mdt.lib.model_fitting] [_apply_user_provided_initialization_data] - Preparing model NODDI with the user provided initialization data.
[2019-03-19 16:07:33,397] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Using MDT version 0.20.3
[2019-03-19 16:07:33,397] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Preparing for model NODDI
[2019-03-19 16:07:33,397] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Current cascade: ['NODDI']
[2019-03-19 16:07:33,731] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 64 volumes.
[2019-03-19 16:07:33,731] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2019-03-19 16:07:33,733] [WARNING] [mdt.utils] [_compute_noise_std] - Failed to obtain a noise std for this subject. We will continue with an std of 1.
[2019-03-19 16:07:33,734] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - Fitting NODDI model
[2019-03-19 16:07:33,734] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - The 6 parameters we will fit are: ['S0.s0', 'w_ic.w', 'NODDI_IC.theta', 'NODDI_IC.phi', 'NODDI_IC.kappa', 'w_ec.w']
[2019-03-19 16:07:33,734] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Saving temporary results in /Users/elijahmak_imac/Library/Mobile Documents/com~apple~CloudDocs/noddi_test/18411_dti/output/b0_brain_mask/NODDI/tmp_results.
[2019-03-19 16:07:33,796] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (165818 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2019-03-19 16:07:33,826] [INFO] [mdt.lib.processing_strategies] [_process] - Starting optimization
[2019-03-19 16:07:33,826] [INFO] [mdt.lib.processing_strategies] [_process] - Using MOT version 0.9.1
[2019-03-19 16:07:33,826] [INFO] [mdt.lib.processing_strategies] [_process] - We will use a single precision float type for the calculations.
[2019-03-19 16:07:33,826] [INFO] [mdt.lib.processing_strategies] [_process] - Using device 'CPU - Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Apple)'.
[2019-03-19 16:07:33,826] [INFO] [mdt.lib.processing_strategies] [_process] - Using compile flags: ['-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros']
[2019-03-19 16:07:33,827] [INFO] [mdt.lib.processing_strategies] [_process] - We will use the optimizer Powell with default settings.
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/gui/utils.py", line 84, in _decorator
response = dec_func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/gui/model_fit/tabs/fit_model_tab.py", line 614, in run
mdt.fit_model(*self._args, **self._kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/__init__.py", line 197, in fit_model
inits = get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/__init__.py", line 89, in get_optimization_inits
return get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 162, in get_optimization_inits
return get_init_data(model_name)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 102, in get_init_data
noddi_results = get_model_fit('NODDI')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 70, in get_model_fit
initialization_data={'inits': get_init_data(model_name)}).run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 336, in run
_, maps = self._run(self._model, self._recalculate, self._only_recalculate_last)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 382, in _run
apply_user_provided_initialization=not _in_recursion)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 393, in _run_composite_model
optimizer_options=self._optimizer_options)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/model_fitting.py", line 465, in fit_composite_model
return processing_strategy.process(worker)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/processing_strategies.py", line 75, in process
self._process_chunk(processor, chunks)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/processing_strategies.py", line 120, in _process_chunk
process()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/processing_strategies.py", line 117, in process
processor.process(chunk, next_indices=next_chunk)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/processing_strategies.py", line 293, in process
self._process(roi_indices, next_indices=next_indices)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/lib/processing_strategies.py", line 467, in _process
x0 = codec.encode(self._model.get_initial_parameters(), self._model.get_kernel_data())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/model_building/utils.py", line 220, in encode
parameters, kernel_data, cl_runtime_info=cl_runtime_info)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mdt/model_building/utils.py", line 257, in _transform_parameters
cl_named_func.evaluate(kernel_data, parameters.shape[0], cl_runtime_info=cl_runtime_info)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mot/lib/cl_function.py", line 248, in evaluate
use_local_reduction=use_local_reduction, cl_runtime_info=cl_runtime_info)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mot/lib/cl_function.py", line 608, in apply_cl_function
cl_function, kernel_data, cl_runtime_info.double_precision, use_local_reduction)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mot/lib/cl_function.py", line 653, in __init__
self._kernel = self._build_kernel(self._get_kernel_source(), compile_flags)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mot/lib/cl_function.py", line 712, in _build_kernel
return cl.Program(self._cl_context, kernel_source).build(' '.join(compile_flags))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyopencl/__init__.py", line 510, in build
options_bytes=options_bytes, source=self._source)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyopencl/__init__.py", line 554, in _build_and_catch_errors
raise err
pyopencl._cl.RuntimeError: clBuildProgram failed: BUILD_PROGRAM_FAILURE - clBuildProgram failed: BUILD_PROGRAM_FAILURE - clBuildProgram failed: BUILD_PROGRAM_FAILURE
Build on <pyopencl.Device 'Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz' on 'Apple' at 0xffffffff>:
(options: -cl-denorms-are-zero -cl-mad-enable -cl-no-signed-zeros -I /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyopencl/cl)
(source saved as /var/folders/t9/6js6fq5x7dnc63vlsx0y_rv00000gn/T/tmpho7wok4g.cl)
Sorry, I am no expert in Python so I cannot make sense of any of that, but I suspect it may be something to do with pyopencl? I would really appreciate your help or any suggestions that I might be able to execute/test.
Thanks!
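Since the build log itself is hidden, one way to narrow this down is to check whether the Apple CPU OpenCL compiler can build any kernel at all; if the trivial kernel below builds, the problem is specific to MDT's generated source, and setting the environment variable PYOPENCL_COMPILER_OUTPUT=1 should reveal the actual compiler errors. A self-contained diagnostic sketch (it assumes only pyopencl; the function name is invented):

```python
def try_build_trivial_kernel():
    """Build a one-line OpenCL kernel to test the compiler in isolation."""
    try:
        import pyopencl as cl
    except ImportError:
        return "pyopencl is not installed"
    src = """
    __kernel void double_it(__global float *a) {
        int i = get_global_id(0);
        a[i] = 2.0f * a[i];
    }
    """
    try:
        ctx = cl.create_some_context(interactive=False)
        cl.Program(ctx, src).build()
    except Exception as exc:
        return "build failed: %s" % exc
    return "trivial kernel builds fine"

print(try_build_trivial_kernel())
```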
Hi there,
I'm trying to start mdt-gui, but it crashes on start with the following Qt-related trace:
$ mdt-gui -d .
QObject::connect: Cannot connect QLineEdit::textChanged(QString) to (null)::directory_updated(QString)
Traceback (most recent call last):
File "/share/software/user/open/py-mdt/0.10.6_py36/bin/mdt-gui", line 11, in <module>
sys.exit(GUI.console_script())
File "/share/software/user/open/py-mdt/0.10.6_py36/lib/python3.6/site-packages/mdt/shell_utils.py", line 44, in console_script
cls().start(sys.argv[1:])
File "/share/software/user/open/py-mdt/0.10.6_py36/lib/python3.6/site-packages/mdt/shell_utils.py", line 63, in start
self.run(args, {})
File "/share/software/user/open/py-mdt/0.10.6_py36/lib/python3.6/site-packages/mdt/cli_scripts/mdt_gui.py", line 40, in run
start_gui(cwd)
File "/share/software/user/open/py-mdt/0.10.6_py36/lib/python3.6/site-packages/mdt/gui/model_fit/qt_main.py", line 273, in start_gui
composite_model_gui = MDTGUISingleModel(state, computations_thread)
File "/share/software/user/open/py-mdt/0.10.6_py36/lib/python3.6/site-packages/mdt/gui/model_fit/qt_main.py", line 88, in __init__
self.view_results_tab.setupUi(self.viewResultsTab)
File "/share/software/user/open/py-mdt/0.10.6_py36/lib/python3.6/site-packages/mdt/gui/model_fit/tabs/view_results_tab.py", line 33, in setupUi
self.selectedFolderText.textChanged.connect(self.directory_updated)
TypeError: connect() failed between textChanged(QString) and directory_updated()
Does that ring any bells?
PyQt5 is installed, for Qt version 5.9.1.
Thanks!
Hi again Robbert,
Many thanks for your help with the set up of MDT on my cluster some time ago. I have been obtaining some promising findings from the ODI maps and now I wish to study the free water maps.
Can I confirm that the output of interest is "w_ic.w.nii.gz" ?
On a separate note, would you recommend using the NODDI-DTI model for single-shell data with a b-value of 1000 s/mm²?
Thank you!
Best Wishes,
Elijah
Dear Robbert,
I am composing a new model by adding its description in a .py file in the folder .mdt/1.2.6/components/user/composite_models. I need to know a few things:
Once I have done this, what else do I have to do to make my new model available?
In modelling the signal I would like to introduce the T1 or T2 decay: I found that in the model TimeDependentActiveAx such a dependence (maybe) appears in the function "ExpT1DecTM_simple": where is the description of that function?
In the file free.py, in the folder .mdt/1.2.6/components/standard/parameters, there is the description of T1:
class T1(FreeParameterTemplate):
init_value = 0.05
lower_bound = 1e-5
upper_bound = 4.0
parameter_transform = ScaleTransform(1e4)
What does the ScaleTransform mean? What is the unit: is it s, scaled by 10^4? For example, is a T1 of 500 ms within the described range?
Thanks a lot for your precious answers.
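On the ScaleTransform question: a hedged reading (based on the general idea of parameter codecs, not on an inspection of MOT's source) is that the parameter is stored in seconds and multiplied by the scale during optimization so all parameters have comparable magnitudes; on that reading a T1 of 500 ms is 0.5 s, well inside [1e-5, 4.0]. A sketch of the idea:

```python
# Illustrative scale transform, mirroring what ScaleTransform(1e4)
# plausibly does: encode into optimizer space by multiplying by the scale,
# decode back by dividing. A sketch of the concept, not MOT's code.
SCALE = 1e4

def encode(t1_seconds):
    return t1_seconds * SCALE   # well-conditioned optimizer-space value

def decode(encoded):
    return encoded / SCALE      # back to seconds

t1 = 0.5                        # 500 ms expressed in seconds
assert 1e-5 <= t1 <= 4.0        # inside the template's bounds
assert decode(encode(t1)) == t1
```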
Dear Robbert
I am trying to fit a signal with the following model
S0ExpT1DecTRNODDImodel ...
Unfortunately, the T1 parameter is not fitted but remains fixed at its initialization value ... Any suggestions?
Thanks a lot
Best
Stefania Oliviero
Hi, I am using MDT 1.2.7. With this build, I am unable to list the models using the mdt-list-models CLI function. When running it from the command line, I receive this error:
If I run mdt-list-devices or mdt-create-protocol, I do not have an issue, which suggests to me that the problem is specific to the list-models function.
Similarly, when I try to run mdt-model-fit with NODDI, I receive this error, and it also fails to output which models I should be choosing from:
This same issue occurs when using the example data with the example code from the documentation:
When I run the Python function, I receive an empty list, as shown below. My environment recognizes that the functions from MDT are loaded, but mdt.get_models_list() returns an empty list:
Please let me know if you have any suggestions.
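One thing worth checking (a guess, since the error text is not shown above) is whether the per-user components folder under ~/.mdt is missing or stale; regenerating it with the mdt-init-user-settings command and retrying is a cheap experiment. A small guarded sketch for probing the Python side (the hint strings are invented):

```python
def probe_model_list():
    """Return MDT's model list, or a hint string when it cannot be obtained."""
    try:
        import mdt
    except ImportError:
        return "mdt is not installed in this environment"
    try:
        models = mdt.get_models_list()
    except Exception as exc:
        return "get_models_list() failed: %s" % exc
    if not models:
        return "empty model list: try regenerating ~/.mdt via mdt-init-user-settings"
    return models

print(probe_model_list())
```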
We've been using MDT on several workstations with GPUs in them happily. But, we now are shifting over to a cluster setup that does not have GPUs but does have nodes with 144 CPU cores.
Singularity> mdt-list-devices -l
Device 0:
===========================================================================
<pyopencl.Platform 'Portable Computing Language' at 0x7f43d8000020>
===========================================================================
extensions: cl_khr_icd
name: Portable Computing Language
profile: FULL_PROFILE
vendor: The pocl project
version: OpenCL 1.2 pocl 1.1 None+Asserts, LLVM 6.0.0, SPIR, SLEEF, DISTRO, POCL_DEBUG
---------------------------------------------------------------------------
<pyopencl.Device 'pthread-Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz' on 'Portable Computing Language' at 0x2e20aa0>
---------------------------------------------------------------------------
...
max compute units: 144
Yet when running mdt-model-fit here, I sit with roughly one CPU fully loaded and that's about it. Polling the CPU usage, the highest I've seen is 113%. My fans aren't even getting a workout.
I'd thought that with OpenCL it would fan the work out to all the CPU cores I have, and yet I'm seemingly running single-threaded. Any ideas?
Craig
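A hedged first diagnostic for the single-core behaviour above is to confirm what the pocl CPU device actually advertises, since OpenCL only parallelizes across work items within one kernel launch; pocl's thread count can also be capped by the POCL_MAX_PTHREAD_COUNT environment variable (a pocl feature, not an MDT one). A guarded sketch:

```python
def report_compute_units():
    """List each visible OpenCL device with its advertised compute units."""
    try:
        import pyopencl as cl
    except ImportError:
        return "pyopencl is not installed"
    try:
        platforms = cl.get_platforms()
    except cl.LogicError:
        return "no OpenCL platforms found"
    lines = []
    for platform in platforms:
        for device in platform.get_devices():
            lines.append("%s: %d compute units"
                         % (device.name, device.max_compute_units))
    return lines

print(report_compute_units())
```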
After following the installation instructions in a python virtual environment:
sudo apt install python3 python3-pip python3-pyopencl \
python3-numpy python3-nibabel python3-pyqt5 \
python3-matplotlib python3-six python3-yaml \
python3-argcomplete libpng-dev libfreetype6-dev libxft-dev
sudo pip3 install mdt
mdt-gui
the command returns the following in the terminal:
Traceback (most recent call last):
File "/usr/local/bin/mdt-gui", line 7, in <module>
from mdt.cli_scripts.mdt_gui import GUI
File "/usr/local/lib/python3.4/dist-packages/mdt/__init__.py", line 16, in <module>
from mdt.configuration import get_logging_configuration_dict
File "/usr/local/lib/python3.4/dist-packages/mdt/configuration.py", line 31, in <module>
from mot.factory import get_optimizer_by_name, get_sampler_by_name
File "/usr/local/lib/python3.4/dist-packages/mot/factory.py", line 1, in <module>
from mot.cl_routines.optimizing.grid_search import GridSearch
File "/usr/local/lib/python3.4/dist-packages/mot/cl_routines/optimizing/grid_search.py", line 5, in <module>
from ...cl_routines.optimizing.base import AbstractParallelOptimizer, AbstractParallelOptimizerWorker
File "/usr/local/lib/python3.4/dist-packages/mot/cl_routines/optimizing/base.py", line 4, in <module>
from mot.cl_routines.mapping.error_measures import ErrorMeasures
File "/usr/local/lib/python3.4/dist-packages/mot/cl_routines/mapping/error_measures.py", line 2, in <module>
from ...cl_routines.base import CLRoutine
File "/usr/local/lib/python3.4/dist-packages/mot/cl_routines/base.py", line 1, in <module>
from mot import configuration
File "/usr/local/lib/python3.4/dist-packages/mot/configuration.py", line 32, in <module>
'cl_environments': CLEnvironmentFactory.smart_device_selection(),
File "/usr/local/lib/python3.4/dist-packages/mot/cl_environments.py", line 255, in smart_device_selection
cl_environments = CLEnvironmentFactory.all_devices()
File "/usr/local/lib/python3.4/dist-packages/mot/cl_environments.py", line 234, in all_devices
if device_supports_double(device):
File "/usr/local/lib/python3.4/dist-packages/mot/utils.py", line 45, in device_supports_double
return cl_device.get_info(cl.device_info.DOUBLE_FP_CONFIG) == 63
pyopencl.LogicError: clGetDeviceInfo failed: invalid value
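The traceback ends in mot/utils.py's device_supports_double, where some OpenCL drivers raise "invalid value" for the DOUBLE_FP_CONFIG query instead of answering it. A hedged sketch of a more defensive variant (an illustration of guarding the query, not the upstream fix; the fallback to the extensions string is an assumption):

```python
def device_supports_double(cl_device, cl_module):
    """Variant of mot's check that degrades gracefully: when the driver
    rejects the DOUBLE_FP_CONFIG query, fall back to the extensions string
    instead of letting the LogicError propagate."""
    try:
        # Same comparison as the failing line in mot/utils.py above.
        return cl_device.get_info(cl_module.device_info.DOUBLE_FP_CONFIG) == 63
    except Exception:
        return 'cl_khr_fp64' in getattr(cl_device, 'extensions', '')
```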
Good morning,
I am using MDT to fit synthetic signals coming from a Monte Carlo diffusion simulation in synthetic brain tissue. I am successfully fitting the signals when using acquisition protocols with 300 or 600 acquisitions, but, unfortunately, I get the following error when using sequences with 2000 or 3000 acquisitions. I don't understand whether the problem concerns the requested RAM or the processor: in both the 300- and 600-acquisition cases the RAM used is less than 3 GB, while the available RAM is more than 50 GB in all cases.
Thanks in advance for any help.
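For context, OUT_OF_RESOURCES from clEnqueueNDRangeKernel on NVIDIA typically points at a per-kernel GPU limit (registers, local memory, or the display watchdog timer) rather than host RAM, so 50 GB of free system memory does not rule it out. As a rough, purely illustrative estimate of how the per-voxel working set grows with protocol length, assuming single precision and that a Levenberg-Marquardt routine keeps at least the residual vector and the Jacobian per voxel (real implementations may store more or less):

```python
def lm_workspace_bytes(n_obs: int, n_params: int, bytes_per_float: int = 4) -> int:
    """Rough per-voxel working set for a Levenberg-Marquardt fit.

    Assumes the optimizer holds the residual vector (n_obs floats) and the
    Jacobian (n_obs * n_params floats) in single precision. This is a
    back-of-envelope sketch, not MOT's actual memory layout.
    """
    residual = n_obs
    jacobian = n_obs * n_params
    return (residual + jacobian) * bytes_per_float
```

With the 4-parameter BallStick_r1 fit from the log, going from 300 to 3913 observations grows this estimate from `lm_workspace_bytes(300, 4)` = 6000 bytes to `lm_workspace_bytes(3913, 4)` = 78260 bytes per voxel, roughly 13x, which is why a protocol that fits comfortably at 300 acquisitions can exhaust a fixed GPU resource budget at several thousand.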
[soliviero@titanic connectoma300_plus]$ mdt-model-fit AxCaliber B_p00_g70.nii connectoma300_plus.prtcl MDT_mask_scaled.nii -n 0.05 --method Levenberg-Marquardt -o outputLM/B_p00_g70 --cl-device-ind 0
/usr/local/lib/python3.6/site-packages/mdt/protocols.py:627: RuntimeWarning: invalid value encountered in true_divide
(protocol['Delta'] - (protocol['delta'] / 3.0))))
[2021-02-25 10:55:12,601] [INFO] [mdt.lib.processing.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2021-02-25 10:55:12,755] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.6
[2021-02-25 10:55:12,755] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2021-02-25 10:55:12,776] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 3913 volumes.
[2021-02-25 10:55:12,776] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2021-02-25 10:55:12,776] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2021-02-25 10:55:12,776] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in outputLM/B_p00_g70/BallStick_r1/tmp_results.
[2021-02-25 10:55:12,936] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 16 voxels (16 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2021-02-25 10:55:12,936] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2021-02-25 10:55:12,936] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.3
[2021-02-25 10:55:12,936] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a single precision float type for the calculations.
[2021-02-25 10:55:12,937] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'GPU - TITAN V (NVIDIA CUDA)'.
[2021-02-25 10:55:12,937] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2021-02-25 10:55:12,937] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Levenberg-Marquardt with default settings.
Traceback (most recent call last):
  File "/usr/local/bin/mdt-model-fit", line 8, in <module>
    sys.exit(ModelFit.console_script())
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/shell_utils.py", line 47, in console_script
    cls().start(sys.argv[1:])
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/shell_utils.py", line 66, in start
    self.run(args, {})
  File "/usr/local/lib/python3.6/site-packages/mdt/cli_scripts/mdt_model_fit.py", line 161, in run
    fit_model()
  File "/usr/local/lib/python3.6/site-packages/mdt/cli_scripts/mdt_model_fit.py", line 155, in fit_model
    use_cascaded_inits=args.use_cascaded_inits)
  File "/usr/local/lib/python3.6/site-packages/mdt/__init__.py", line 191, in fit_model
    double_precision=double_precision)
  File "/usr/local/lib/python3.6/site-packages/mdt/__init__.py", line 99, in get_optimization_inits
    double_precision=double_precision)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/model_fitting.py", line 177, in get_optimization_inits
    return get_init_data(model_name)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/model_fitting.py", line 161, in get_init_data
    fit_results = get_model_fit('BallStick_r1')
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/model_fitting.py", line 86, in get_model_fit
    cl_device_ind=cl_device_ind, initialization_data={'inits': inits})
  File "/usr/local/lib/python3.6/site-packages/mdt/__init__.py", line 206, in fit_model
    optimizer_options=optimizer_options)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/model_fitting.py", line 306, in fit_composite_model
    return processing_strategy.process(worker)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/processing_strategies.py", line 106, in process
    self._process_chunk(processor, chunks)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/processing_strategies.py", line 151, in _process_chunk
    process()
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/processing_strategies.py", line 148, in process
    processor.process(chunk, next_indices=next_chunk)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/processing_strategies.py", line 275, in process
    self._process(roi_indices, next_indices=next_indices)
  File "/usr/local/lib/python3.6/site-packages/mdt/lib/processing/model_fitting.py", line 380, in _process
    options=self._optimizer_options)
  File "/usr/local/lib/python3.6/site-packages/mot/optimize/__init__.py", line 123, in minimize
    constraints_func=constraints_func, data=data, options=options)
  File "/usr/local/lib/python3.6/site-packages/mot/optimize/__init__.py", line 532, in _minimize_levenberg_marquardt
    cl_runtime_info=cl_runtime_info)
  File "/usr/local/lib/python3.6/site-packages/mot/lib/cl_function.py", line 362, in evaluate
    events = processor.process(wait_for=wait_for)
  File "/usr/local/lib/python3.6/site-packages/mot/lib/cl_processors.py", line 90, in process
    events.update(worker.process(wait_for=wait_for))
  File "/usr/local/lib/python3.6/site-packages/mot/lib/cl_processors.py", line 144, in process
    wait_for=wait_for)
  File "/usr/local/lib64/python3.6/site-packages/pyopencl/__init__.py", line 861, in kernel_call
    return self._enqueue(self, queue, global_size, local_size, *args, **kwargs)
  File "", line 189, in enqueue_knl_kernel_lmmin
pyopencl._cl.RuntimeError: clEnqueueNDRangeKernel failed: OUT_OF_RESOURCES
(A second run of the same mdt-model-fit command on 2021-02-26 produced the identical OUT_OF_RESOURCES traceback.)
To whom it may concern,
Glad to make contact. I have a question: I am running code that worked a few months ago but no longer does. To be more precise, when mdt.fit_model(...) runs, the following console output appears:
C:\Users\lagos\Anaconda3\python.exe C:/Users/lagos/Desktop/Toolbox/mt-scripts/mt_scripts/franc/relaxometry/Experimental_Studies/20201207_InSilico_R2sAngularDependency_MyelinStudy.py
pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1
pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1
pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1
pixdim[0] (qfac) should be 1 (default) or -1; setting qfac to 1
[2020-12-07 16:06:05,182] [INFO] [mdt] [fit_model] - Preparing S0-QuadTE with the cascaded initializations.
[2020-12-07 16:06:05,185] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.5
[2020-12-07 16:06:05,185] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model S0-QuadTE
[2020-12-07 16:06:05,372] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 16 volumes.
[2020-12-07 16:06:05,373] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting S0-QuadTE model
[2020-12-07 16:06:05,373] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 3 parameters we will fit are: ['S0.s0', 'QuadTE.beta1', 'QuadTE.beta2']
[2020-12-07 16:06:05,373] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in G:\Projects\Optic_Chiasm_R2_Dispersion\Simulation_Dataset\Data_InSilico\output\InSilico_NoDispNoMye_AngMeas1\MaxTE16_2\S0-QuadTE\tmp_results.
[2020-12-07 16:06:05,726] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (335000 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2020-12-07 16:06:05,727] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2020-12-07 16:06:05,727] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.3
[2020-12-07 16:06:05,727] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a double precision float type for the calculations.
[2020-12-07 16:06:05,727] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'GPU - Intel(R) HD Graphics 520 (Intel(R) OpenCL HD Graphics)'.
[2020-12-07 16:06:05,727] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2020-12-07 16:06:05,728] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Powell with optimizer settings {'patience': 10}
InvalidArraySize: Array size must be at least 1: [0 x i8] [Src: d:\qb\workspace\22235\source\llvm_source\projects\llvm-spirv\lib\spirv\spirvwriter.cpp:332 T->getArrayNumElements() >= 1 ]
This last error is strange, because the dataset (mdt.load_input_data) loads properly: all the volumes, the protocol, the extra protocol, and noise_std = value are present. I would like to ask where the problem could lie.
Best regards,
Francisco
MDT development team,
Hello, I am a graduate student using this toolbox for processing diffusion MRI. After updating to the latest version, the Python API function mdt.get_optimization_inits accepts only some model_name arguments and rejects others.
example below:
init = mdt.get_optimization_inits('NODDI_DTI', input_data, output_folder)
raises the following error:
Traceback (most recent call last):
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/mdt/lib/components.py", line 334, in get_model
    return component_library.get_component('composite_models', model_name)
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/mdt/lib/components.py", line 100, in get_component
    raise ValueError('Can not find a component of type "{}" with name "{}"'.format(component_type, name))
ValueError: Can not find a component of type "composite_models" with name "NODDI_DTI"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in <module>
    cInit = mdt.get_optimization_inits('NODDI_DTI', cInput, sSaveName)
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/mdt/__init__.py", line 88, in get_optimization_inits
    return get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind)
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/mdt/lib/model_fitting.py", line 161, in get_optimization_inits
    return get_init_data(model_name)
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/mdt/lib/model_fitting.py", line 75, in get_init_data
    free_parameters = get_model(model_name)().get_free_param_names()
  File ".../CondaEnvironments/CooperdMRIPipeline2/lib/python3.6/site-packages/mdt/lib/components.py", line 336, in get_model
    raise ValueError('The model with the name "{}" could not be found.'.format(model_name))
ValueError: The model with the name "NODDI_DTI" could not be found.
The following model names I have tried raise the same error:
model_name='NODDI', 'BinghamNODDI', or 'NODDI-DTI'
These models run normally:
model_name='NODDIDA'
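When get_component raises a "could not be found" ValueError like the one above, it helps to compare the requested name against whatever model names the installed MDT actually exposes (the exact lookup helper varies by version, so it is not assumed here). A generic, library-free way to get "did you mean?" suggestions is a fuzzy match over the available names; the model list below is purely illustrative, not the real list of any MDT release:

```python
import difflib

def suggest_model_name(requested, available):
    """Return up to three close matches for a model name that failed to resolve."""
    return difflib.get_close_matches(requested, available, n=3, cutoff=0.6)

# Illustrative only -- not the actual model list of any MDT release.
models = ["NODDIDA", "BallStick_r1", "Tensor", "CHARMED_r1"]
print(suggest_model_name("NODDI_DTI", models))  # suggests the nearby 'NODDIDA'
```

A misspelled or renamed model then turns from a bare ValueError into an actionable hint about which names the installation recognises.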
Any help you could give would be greatly appreciated! Thank you,
-Cooper Mellema
Hi, I love this toolbox so far - especially the speed and flexibility! I've noticed that MDT tends to down-weight CSF voxels in the ODI images. In general, this is fine, but I'm working with older adults who tend to have larger ventricles. As a result, MDT occasionally classifies their subcortical structures (e.g., caudate) as CSF and knocks out the signal from these regions.
Is there a way to get the exact same results as the NODDI Matlab toolbox?
Thanks in advance & thanks for contributing such an amazing resource,
John
Hello,
I am trying to get MDT running on my local machine, using an HCP subject as a test case with NODDI (Cascade). It seems to get through the S0 phase ok, but is failing during later optimization, with pyopencl.LogicError: clFinish failed: invalid command queue
Any ideas? (Great toolbox by the way, eager to start using it!)
The full log is below, let me know if there is any other info that might be helpful.
mdt-batch-fit . 'NODDI (Cascade)'
[2018-03-31 00:17:54,344] [INFO] [mdt] [batch_fit] - Using MDT version 0.10.9
[2018-03-31 00:17:54,344] [INFO] [mdt] [batch_fit] - Using batch profile: HCP WU-Minn
[2018-03-31 00:17:54,388] [INFO] [mdt] [batch_fit] - Fitting models: ['NODDI (Cascade)']
[2018-03-31 00:17:54,388] [INFO] [mdt] [batch_fit] - Subjects found: 1
[2018-03-31 00:17:54,388] [INFO] [mdt] [batch_fit] - Subjects to process: 1
[2018-03-31 00:17:54,411] [INFO] [mdt.model_fitting] [__call__] - Going to process subject 100307, (1 of 1, we are at 0.00%)
[2018-03-31 00:17:54,904] [INFO] [mdt.model_fitting] [__call__] - Loading the data (DWI, mask and protocol) of subject 100307
[2018-03-31 00:18:14,224] [INFO] [mdt.model_fitting] [__call__] - Going to fit model NODDI (Cascade) on subject 100307
[2018-03-31 00:18:14,561] [INFO] [mdt.utils] [configure_per_model_logging] - Started appending to the per model log file
[2018-03-31 00:18:14,561] [INFO] [mdt.model_fitting] [run] - Using MDT version 0.10.9
[2018-03-31 00:18:14,561] [INFO] [mdt.model_fitting] [run] - Preparing for model S0
[2018-03-31 00:18:14,562] [INFO] [mdt.model_fitting] [run] - Current cascade: ['NODDI (Cascade)', 'BallStick_r1 (Cascade)', 'S0']
[2018-03-31 00:18:14,563] [INFO] [mdt.models.composite] [_prepare_input_data] - For this model, S0, we will use a subset of the protocol and DWI.
[2018-03-31 00:18:14,563] [INFO] [mdt.models.composite] [_prepare_input_data] - Using 18 out of 288 volumes, indices: [0 16 32 48 64 80 95 112 128 144 160 176 191 208 224 240 256 272]
[2018-03-31 00:18:14,700] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2018-03-31 00:18:15,081] [INFO] [mdt.utils] [estimate] - Found global noise std 223.0515899658203 using estimator AllUnweightedVolumes.
[2018-03-31 00:18:15,085] [INFO] [mdt.model_fitting] [_logging] - Fitting S0 model
[2018-03-31 00:18:15,085] [INFO] [mdt.model_fitting] [_logging] - The parameters we will fit are: ['S0.s0']
[2018-03-31 00:18:15,085] [INFO] [mdt.model_fitting] [run] - Saving temporary results in /home/ROBARTS/alik/graham/home/test_mdt_output/100307/S0/tmp_results.
[2018-03-31 00:18:17,605] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (741046 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2018-03-31 00:18:17,905] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Entered optimization routine.
[2018-03-31 00:18:17,905] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Using MOT version 0.3.12
[2018-03-31 00:18:17,905] [INFO] [mot.cl_routines.optimizing.base] [minimize] - We will use a single precision float type for the calculations.
[2018-03-31 00:18:17,905] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Using device 'GPU - Quadro K2200 (NVIDIA CUDA)'.
[2018-03-31 00:18:17,906] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Using compile flags: ['-cl-mad-enable', '-cl-single-precision-constant', '-cl-no-signed-zeros', '-cl-denorms-are-zero']
[2018-03-31 00:18:17,906] [INFO] [mot.cl_routines.optimizing.base] [minimize] - We will use the optimizer Powell with optimizer settings {'glimit': 100.0, 'patience': 2, 'reset_method': 'EXTRAPOLATED_POINT', 'bracket_gold': 1.618034}
[2018-03-31 00:18:17,906] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Starting optimization preliminaries
[2018-03-31 00:18:18,052] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Finished optimization preliminaries
[2018-03-31 00:18:18,052] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Starting optimization
[2018-03-31 00:18:20,447] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Finished optimization
[2018-03-31 00:18:28,103] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 13.49%, processing next 100000 voxels (741046 voxels in total, 100000 processed). Time spent: 0:00:00:10, time left: 0:00:01:07 (d:h:m:s).
[2018-03-31 00:18:33,167] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 26.99%, processing next 100000 voxels (741046 voxels in total, 200000 processed). Time spent: 0:00:00:15, time left: 0:00:00:42 (d:h:m:s).
[2018-03-31 00:18:37,869] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 40.48%, processing next 100000 voxels (741046 voxels in total, 300000 processed). Time spent: 0:00:00:20, time left: 0:00:00:29 (d:h:m:s).
[2018-03-31 00:18:42,648] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 53.98%, processing next 100000 voxels (741046 voxels in total, 400000 processed). Time spent: 0:00:00:25, time left: 0:00:00:21 (d:h:m:s).
[2018-03-31 00:18:47,598] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 67.47%, processing next 100000 voxels (741046 voxels in total, 500000 processed). Time spent: 0:00:00:29, time left: 0:00:00:14 (d:h:m:s).
[2018-03-31 00:18:52,488] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 80.97%, processing next 100000 voxels (741046 voxels in total, 600000 processed). Time spent: 0:00:00:34, time left: 0:00:00:08 (d:h:m:s).
[2018-03-31 00:18:57,892] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 94.46%, processing next 41046 voxels (741046 voxels in total, 700000 processed). Time spent: 0:00:00:40, time left: 0:00:00:02 (d:h:m:s).
[2018-03-31 00:19:01,662] [INFO] [mdt.processing_strategies] [process] - Computed all voxels, now creating nifti's
[2018-03-31 00:19:20,289] [INFO] [mdt.model_fitting] [_logging] - Fitted S0 model with runtime 00:01:05 (h:m:s).
[2018-03-31 00:19:20,292] [INFO] [mdt.utils] [configure_per_model_logging] - Stopped appending to the per model log file
[2018-03-31 00:19:22,701] [INFO] [mdt.utils] [configure_per_model_logging] - Started appending to the per model log file
[2018-03-31 00:19:22,702] [INFO] [mdt.model_fitting] [run] - Using MDT version 0.10.9
[2018-03-31 00:19:22,702] [INFO] [mdt.model_fitting] [run] - Preparing for model BallStick_r1
[2018-03-31 00:19:22,703] [INFO] [mdt.model_fitting] [run] - Current cascade: ['NODDI (Cascade)', 'BallStick_r1 (Cascade)', 'BallStick_r1']
[2018-03-31 00:19:22,703] [INFO] [mdt.models.composite] [_prepare_input_data] - No model protocol options to apply, using original protocol.
[2018-03-31 00:19:22,704] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2018-03-31 00:19:23,133] [INFO] [mdt.utils] [estimate] - Found global noise std 223.0515899658203 using estimator AllUnweightedVolumes.
[2018-03-31 00:19:23,137] [INFO] [mdt.model_fitting] [_logging] - Fitting BallStick_r1 model
[2018-03-31 00:19:23,137] [INFO] [mdt.model_fitting] [_logging] - The parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2018-03-31 00:19:23,137] [INFO] [mdt.model_fitting] [run] - Saving temporary results in /home/ROBARTS/alik/graham/home/test_mdt_output/100307/BallStick_r1/tmp_results.
[2018-03-31 00:19:25,441] [INFO] [mdt.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (741046 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2018-03-31 00:19:31,543] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Entered optimization routine.
[2018-03-31 00:19:31,543] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Using MOT version 0.3.12
[2018-03-31 00:19:31,544] [INFO] [mot.cl_routines.optimizing.base] [minimize] - We will use a single precision float type for the calculations.
[2018-03-31 00:19:31,544] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Using device 'GPU - Quadro K2200 (NVIDIA CUDA)'.
[2018-03-31 00:19:31,544] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Using compile flags: ['-cl-mad-enable', '-cl-single-precision-constant', '-cl-no-signed-zeros', '-cl-denorms-are-zero']
[2018-03-31 00:19:31,544] [INFO] [mot.cl_routines.optimizing.base] [minimize] - We will use the optimizer Powell with optimizer settings {'glimit': 100.0, 'patience': 2, 'reset_method': 'EXTRAPOLATED_POINT', 'bracket_gold': 1.618034}
[2018-03-31 00:19:31,544] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Starting optimization preliminaries
[2018-03-31 00:19:31,592] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Finished optimization preliminaries
[2018-03-31 00:19:31,592] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Starting optimization
Traceback (most recent call last):
  File "/usr/bin/mdt-batch-fit", line 9, in <module>
    load_entry_point('mdt==0.10.9', 'console_scripts', 'mdt-batch-fit')()
  File "/usr/lib/python3/dist-packages/mdt/shell_utils.py", line 44, in console_script
    cls().start(sys.argv[1:])
  File "/usr/lib/python3/dist-packages/mdt/shell_utils.py", line 63, in start
    self.run(args, {})
  File "/usr/lib/python3/dist-packages/mdt/cli_scripts/mdt_batch_fit.py", line 140, in run
    use_gradient_deviations=args.use_gradient_deviations)
  File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 325, in batch_fit
    return batch_apply(batch_fit_func, data_folder, batch_profile=batch_profile, subjects_selection=subjects_selection)
  File "/usr/lib/python3/dist-packages/mdt/batch_utils.py", line 479, in batch_apply
    results[subject.subject_id] = f(subject)
  File "/usr/lib/python3/dist-packages/mdt/batch_utils.py", line 477, in f
    return func(subject)
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 96, in __call__
    model_fit.run()
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 191, in run
    _, maps = self._run(self._model, self._recalculate, self._only_recalculate_last)
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 228, in _run
    _in_recursion=new_in_recursion)
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 228, in _run
    _in_recursion=new_in_recursion)
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 237, in _run
    apply_user_provided_initialization=not _in_recursion)
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 254, in _run_composite_model
    results = fitter.run()
  File "/usr/lib/python3/dist-packages/mdt/model_fitting.py", line 338, in run
    results = processing_strategy.process(worker)
  File "/usr/lib/python3/dist-packages/mdt/processing_strategies.py", line 68, in process
    self._process_chunk(processor, chunks)
  File "/usr/lib/python3/dist-packages/mdt/processing_strategies.py", line 113, in _process_chunk
    process()
  File "/usr/lib/python3/dist-packages/mdt/processing_strategies.py", line 110, in process
    processor.process(chunk, next_indices=next_chunk)
  File "/usr/lib/python3/dist-packages/mdt/processing_strategies.py", line 284, in process
    self._process(roi_indices, next_indices=next_indices)
  File "/usr/lib/python3/dist-packages/mdt/processing_strategies.py", line 437, in _process
    optimization_results = self._optimizer.minimize(model)
  File "/usr/lib/python3/dist-packages/mot/cl_routines/optimizing/base.py", line 185, in minimize
    self.load_balancer.process(workers, model.get_nmr_problems())
  File "/usr/lib/python3/dist-packages/mot/load_balance_strategies.py", line 408, in process
    single_batch_length=single_batch_length)
  File "/usr/lib/python3/dist-packages/mot/load_balance_strategies.py", line 332, in process
    self._run_batches(workers, batches)
  File "/usr/lib/python3/dist-packages/mot/load_balance_strategies.py", line 265, in _run_batches
    queue.finish()
pyopencl.LogicError: clFinish failed: invalid command queue
Thanks for the quick reply to the last (non)-issue!
This one is stumping me a bit more: I am running MDT on some 7T data where we have performed gradient non-linearity correction and created a gradient deviations file in the same format as HCP data (9-dim in dim4). I have been using these datasets (with gradient deviations) in FSL dtifit/bedpostx with no issues, but I run into an error when trying them in MDT.
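As a sanity check on such a file: the HCP-style format stores a per-voxel 3x3 tensor L (the 9 values along dim4), and a common convention applies v = (I + L)·g to each gradient direction g, scales the effective b-value by |v|², and renormalises the direction. Whether MDT follows exactly this convention internally is an assumption here; the sketch below just illustrates the arithmetic for one (nonzero) direction:

```python
def correct_bvec(g, L):
    """Apply a per-voxel gradient deviation tensor to one gradient direction.

    Follows the common HCP convention: v = (I + L) @ g, with the effective
    b-value scaled by |v|^2 and the direction renormalised. Assumes g is a
    nonzero 3-vector and L a 3x3 nested list.
    """
    v = [g[r] + sum(L[r][c] * g[c] for c in range(3)) for r in range(3)]
    norm2 = sum(x * x for x in v)       # squared norm -> b-value scale factor
    norm = norm2 ** 0.5
    return [x / norm for x in v], norm2  # (unit direction, b-value scale)
```

With L = 0 the direction and b-value pass through unchanged, which is a useful first test that a gradient deviations volume is being read with the expected axis ordering.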
I've shared the dataset here as well (250MB tar): https://www.dropbox.com/s/veqn9vsi5fhtyas/test_dwi_grad_dev.tar?dl=0
It runs fine without the --gradient-deviation option.
But if I run the following commands,
mdt-create-protocol sub-CT01_dwi_space-T1wGC_preproc.bvec sub-CT01_dwi_space-T1wGC_preproc.bval -o sub-CT01_dwi_space-T1wGC_preproc.prtcl
mdt-model-fit NODDI sub-CT01_dwi_space-T1wGC_preproc.nii.gz sub-CT01_dwi_space-T1wGC_preproc.prtcl sub-CT01_dwi_space-T1wGC_brainmask.nii.gz --gradient-deviations sub-CT01_dwi_space-T1wGC_preproc.grad_dev.nii.gz
I get this output:
[2019-04-01 13:33:57,755] [INFO] [mdt.lib.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2019-04-01 13:33:57,960] [INFO] [mdt.lib.model_fitting] [_apply_user_provided_initialization_data] - Preparing model BallStick_r1 with the user provided initialization data.
[2019-04-01 13:33:57,967] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Using MDT version 0.20.3
[2019-04-01 13:33:57,967] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2019-04-01 13:33:57,968] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Current cascade: ['BallStick_r1']
[2019-04-01 13:33:58,991] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 103 volumes.
[2019-04-01 13:33:59,041] [INFO] [mdt.models.composite] [set_input_data] - Using the gradient deviations in the model optimization.
[2019-04-01 13:33:59,041] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2019-04-01 13:33:59,107] [INFO] [mdt.utils] [estimate_noise_std] - Estimated global noise std 39.15896224975586.
[2019-04-01 13:33:59,108] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2019-04-01 13:33:59,108] [INFO] [mdt.lib.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2019-04-01 13:33:59,108] [INFO] [mdt.lib.model_fitting] [fit_composite_model] - Saving temporary results in /scratch/akhanf/test_7T_mdt_grad_dev_enabled/sub-CT01/BallStick_r1/tmp_results.
[2019-04-01 13:33:59,139] [INFO] [mdt.lib.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (309970 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2019-04-01 13:33:59,168] [INFO] [mdt.lib.processing_strategies] [_process] - Starting optimization
[2019-04-01 13:33:59,168] [INFO] [mdt.lib.processing_strategies] [_process] - Using MOT version 0.9.1
[2019-04-01 13:33:59,168] [INFO] [mdt.lib.processing_strategies] [_process] - We will use a single precision float type for the calculations.
[2019-04-01 13:33:59,168] [INFO] [mdt.lib.processing_strategies] [_process] - Using device 'CPU - Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz (Intel(R) OpenCL)'.
[2019-04-01 13:33:59,169] [INFO] [mdt.lib.processing_strategies] [_process] - Using compile flags: ['-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros']
[2019-04-01 13:33:59,169] [INFO] [mdt.lib.processing_strategies] [_process] - We will use the optimizer Powell with default settings.
[2019-04-01 13:34:17,811] [INFO] [mdt.lib.processing_strategies] [_process] - Finished optimization
[2019-04-01 13:34:17,813] [INFO] [mdt.lib.processing_strategies] [_process] - Starting post-processing
2 errors generated.
/usr/lib/python3/dist-packages/pyopencl/__init__.py:63: CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more.
"to see more.", CompilerWarning)
Traceback (most recent call last):
File "/usr/bin/mdt-model-fit", line 9, in <module>
load_entry_point('mdt==0.20.3', 'console_scripts', 'mdt-model-fit')()
File "/usr/lib/python3/dist-packages/mdt/lib/shell_utils.py", line 47, in console_script
cls().start(sys.argv[1:])
File "/usr/lib/python3/dist-packages/mdt/lib/shell_utils.py", line 66, in start
self.run(args, {})
File "/usr/lib/python3/dist-packages/mdt/cli_scripts/mdt_model_fit.py", line 168, in run
fit_model()
File "/usr/lib/python3/dist-packages/mdt/cli_scripts/mdt_model_fit.py", line 162, in fit_model
use_cascaded_inits=args.use_cascaded_inits)
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 197, in fit_model
inits = get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind)
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 89, in get_optimization_inits
return get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind)
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 162, in get_optimization_inits
return get_init_data(model_name)
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 94, in get_init_data
fit_results = get_model_fit('BallStick_r1')
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 70, in get_model_fit
initialization_data={'inits': get_init_data(model_name)}).run()
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 336, in run
_, maps = self._run(self._model, self._recalculate, self._only_recalculate_last)
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 382, in _run
apply_user_provided_initialization=not _in_recursion)
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 393, in _run_composite_model
optimizer_options=self._optimizer_options)
File "/usr/lib/python3/dist-packages/mdt/lib/model_fitting.py", line 465, in fit_composite_model
return processing_strategy.process(worker)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 75, in process
self._process_chunk(processor, chunks)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 120, in _process_chunk
process()
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 117, in process
processor.process(chunk, next_indices=next_chunk)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 293, in process
self._process(roi_indices, next_indices=next_indices)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 490, in _process
results = self._model.get_post_optimization_output(x_final, results['status'])
File "/usr/lib/python3/dist-packages/mdt/models/composite.py", line 257, in get_post_optimization_output
volume_maps = self.post_process_optimization_maps(volume_maps, results_array=optimized_parameters)
File "/usr/lib/python3/dist-packages/mdt/models/composite.py", line 348, in post_process_optimization_maps
fim = self._compute_fisher_information_matrix(results_array)
File "/usr/lib/python3/dist-packages/mdt/models/composite.py", line 589, in _compute_fisher_information_matrix
cl_runtime_info=CLRuntimeInfo(double_precision=True)
File "/usr/lib/python3/dist-packages/mot/cl_routines/numerical_differentiation.py", line 133, in estimate_hessian
hessian_kernel.evaluate(kernel_data, nmr_voxels, use_local_reduction=True, cl_runtime_info=cl_runtime_info)
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 248, in evaluate
use_local_reduction=use_local_reduction, cl_runtime_info=cl_runtime_info)
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 608, in apply_cl_function
cl_function, kernel_data, cl_runtime_info.double_precision, use_local_reduction)
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 653, in __init__
self._kernel = self._build_kernel(self._get_kernel_source(), compile_flags)
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 712, in _build_kernel
return cl.Program(self._cl_context, kernel_source).build(' '.join(compile_flags))
File "/usr/lib/python3/dist-packages/pyopencl/__init__.py", line 213, in build
options=options, source=self._source)
File "/usr/lib/python3/dist-packages/pyopencl/__init__.py", line 253, in _build_and_catch_errors
raise err
pyopencl.RuntimeError: clBuildProgram failed: build program failure -
Build on <pyopencl.Device 'Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz' on 'Intel(R) OpenCL' at 0x2751f48>:
Compilation started
6:529:35: error: scalar operand type has greater rank than the type of the vector element. ('mot_float_type' (aka 'double') and 'float4' (vector of 4 'float' values))
6:529:35: error: can't convert between vector values of different size ('float4' (vector of 4 'float' values) and 'mot_float_type' (aka 'double'))
Compilation failed
(options: -cl-denorms-are-zero -cl-mad-enable -cl-no-signed-zeros -I /usr/lib/python3/dist-packages/pyopencl/cl)
(source saved as /tmp/tmp_koj1nk4.cl)
I am getting the following installation error
$ sudo apt-get install python3-mdt
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python3-mdt : Depends: python3-mot but it is not installable
E: Unable to correct problems, you have held broken packages.
Dear Harm,
I fitted the tensor model to my DWI data and the resulting MD values are rather low, on the order of 1e-9. I wanted to ask whether these low values are normal. In your NeuroImage (2017) paper, I see that diffusivity is expressed in square meters per second; should I also interpret the MD values at this scale?
Thank you, best wishes,
Sanne
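For context on the scale question above: it comes down to a unit conversion. Brain diffusivities are commonly quoted around 0.7-3.0 x 1e-3 mm^2/s, which is the same quantity as 0.7-3.0 x 1e-9 m^2/s. A quick sanity check of the conversion (plain arithmetic, nothing MDT-specific assumed):

```python
# Diffusivity unit conversion: 1 mm^2 = 1e-6 m^2, so
# a value of 1e-9 m^2/s equals 1e-3 mm^2/s.
md_m2_per_s = 1e-9                 # MD as reported, in m^2/s
md_mm2_per_s = md_m2_per_s / 1e-6  # convert m^2/s -> mm^2/s
print(md_mm2_per_s)                # -> 0.001
```

So an MD of 1e-9 in SI units is the familiar ~1e-3 figure from the mm^2/s convention, not an abnormally low value.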
Dear Robbert,
I've fitted the NODDI model and I'm uncertain about the interpretation of the various output maps. I'm specifically looking for the correspondence between the outputs w_ic.w, w_ec.w & w_csf.w and the NODDI formula from the Zhang 2012 paper*, where you also refer to in your NeuroImage (2017) paper.
I'm quoting the formula from the Zhang paper below (unfortunately I can not use subscript):
A = (1 - Viso) (Vic Aic + (1-Vic)Aec) + Viso Aiso
They further describe Vic as representing the neurite density, i.e. the volume fraction of the intra-cellular compartment.
I assumed that w_ic.w from MDT corresponds to Vic, and thus represents the neurite density. However, given that w_ic.w + w_ec.w + w_csf.w = 1, I'm not so sure anymore.
So I'm wondering what the three maps w_ic.w, w_ec.w & w_csf.w correspond to in the Zhang formula, and how I should interpret each of them.
Also, your NeuroImage paper states that the Orientation Dispersion Index (ODI) is defined in the Zhang 2011** paper. I could not find the corresponding formula there, and given the output values of the ODI maps, I was wondering whether it should be the Zhang 2012 paper instead?
Thanks a lot for your answers (again) and best of wishes, Sanne
** Zhang, H., Hubbard, P.L., Parker, G.J., Alexander, D.C., 2011. Axon diameter mapping in
the presence of orientation dispersion with diffusion MRI. Neuroimage 56 (3),
1301–1315.
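On the weights question above: as far as I can tell (this mapping is my reading of the two parameterisations, not something stated in the MDT documentation), MDT reports the three compartment weights as direct volume fractions summing to one, while Zhang et al. use nested fractions. Under that assumption, Viso = w_csf.w and Vic = w_ic.w / (w_ic.w + w_ec.w):

```python
def mdt_weights_to_zhang(w_ic, w_ec, w_csf):
    """Convert MDT's direct volume fractions (summing to 1) to
    Zhang-2012-style nested fractions (assumed mapping)."""
    v_iso = w_csf
    v_ic = w_ic / (w_ic + w_ec)  # intra-neurite fraction of the non-CSF tissue
    return v_ic, v_iso

# Example: 40% intra-cellular, 40% extra-cellular, 20% CSF
v_ic, v_iso = mdt_weights_to_zhang(0.4, 0.4, 0.2)
print(v_ic, v_iso)  # -> 0.5 0.2
```

Under this reading, w_ic.w itself is not Zhang's "neurite density" Vic; the renormalisation over the non-CSF part is what recovers it.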
Hi,
I'm processing a fairly large dataset of DWI images, so the GPU acceleration that MDT offers is very desirable compared to the Matlab toolbox. Before going ahead with everything, though, I'm trying to get the results of the two into exact numerical agreement, so I can be confident that what I'm doing is exactly what I intend!
I am using the current version of the NODDI Matlab toolbox, with all default settings. With this I've processed a whole-brain DWI scan.
On the MDT side of things, I have run the current MDT version with the "NODDI" model on the same data. Comparing the ODI outputs, the results were initially similar but somewhat different in areas of brain tissue, and markedly different in areas of CSF, where MDT appeared to almost zero those voxels while the Matlab toolbox assigned very high values. I came across a previous thread on GitHub that seems to cover the same issue - https://github.com/robbert-harms/MDT/issues?q=is%3Aissue+is%3Aclosed
I followed that advice, setting the noise std value to "1", and reran the model. In brain tissue the results are now much more similar, to the point where you might not notice any difference by eye, but on inspection the values still diverge marginally; e.g. picking a random voxel from the corpus callosum, MDT gives 0.562... while Matlab gives 0.566...
I realise this doesn't necessarily invalidate any of the data, but it would be reassuring to achieve an exact match before proceeding with the main analysis. Any advice on further settings that could be adjusted to bring it all in line would be very appreciated!
Thank you very much in advance,
Iain
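One mundane source of such third-decimal discrepancies is arithmetic precision: the MDT logs elsewhere on this page show it defaulting to single precision ("We will use a single precision float type"), while the Matlab toolbox computes in double. A toy standard-library illustration (nothing MDT-specific) of how float32 rounding accumulates over many operations:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 one hundred thousand times, rounding every intermediate
# result to float32, and compare against a double-precision reference.
total32 = 0.0
for _ in range(100_000):
    total32 = f32(total32 + f32(0.1))

total64 = 0.1 * 100_000
print(abs(total64 - total32))  # a visible drift, far above double-precision error
```

An iterative optimizer running thousands of such operations per voxel in single precision can easily land a few thousandths away from a double-precision result; rerunning MDT with its double-precision option would be the first thing to try when chasing an exact match.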
Dear all,
I have been trying to use mdt-model-fit as follows on my data:
mdt-model-fit NODDI dwi.nii.gz protocol.prtcl dwi_brainmask.nii.gz
I get "Segmentation fault (core dumped)" after a while. I was wondering if you can help me find out what went wrong.
Here is the log and output folder:
└── BallStick_r1
├── info.log
└── tmp_results
└── processing_tmp
└── roi_voxel_lookup_table.npy
Log:
[2020-04-19 02:22:10,805] [INFO] [mdt.lib.processing.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2020-04-19 02:22:10,945] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.3
[2020-04-19 02:22:10,945] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2020-04-19 02:22:11,525] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 300 volumes.
[2020-04-19 02:22:11,525] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2020-04-19 02:22:11,641] [INFO] [mdt.utils] [estimate_noise_std] - Estimated global noise std 4.953189373016357.
[2020-04-19 02:22:11,641] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2020-04-19 02:22:11,641] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2020-04-19 02:22:11,641] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in output/sub-CC00060XX03_ses-12501_desc-preproc_space-dwi_brainmask/BallStick_r1/tmp_results.
[2020-04-19 02:22:11,829] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (215497 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2020-04-19 02:22:11,829] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2020-04-19 02:22:11,830] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.2
[2020-04-19 02:22:11,830] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a single precision float type for the calculations.
[2020-04-19 02:22:11,830] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'CPU - pthread-Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz (Portable Computing Language)'.
[2020-04-19 02:22:11,830] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2020-04-19 02:22:11,830] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Powell with default settings.
[2020-04-19 05:14:02,626] [INFO] [mdt.lib.processing.model_fitting] [_process] - Finished optimization
[2020-04-19 05:14:02,626] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting post-processing
Segmentation fault (core dumped)
Hello,
I am running MDT from a Singularity container and am getting a runtime error when running the mdt-model-fit command. Is there any way to figure out what the problem is?
Singularity> mdt-create-protocol 119576/V12-CVD/*.bvec 119576/V12-CVD/*.bval
Singularity> mdt-model-fit NODDI 119576/V12-CVD/sub-119576_ses-V12-CVD_dwi.nii.gz 119576/V12-CVD/sub-119576_ses-V12-CVD_dwi.prtcl 119576/V12-CVD/*brainmask.nii.gz
[2023-09-05 12:14:06,835] [INFO] [mdt.lib.processing.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2023-09-05 12:14:06,952] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.6
[2023-09-05 12:14:06,953] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2023-09-05 12:14:07,390] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 197 volumes.
[2023-09-05 12:14:07,390] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2023-09-05 12:14:07,484] [INFO] [mdt.utils] [estimate_noise_std] - Estimated global noise std 267.0830932932669.
[2023-09-05 12:14:07,485] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2023-09-05 12:14:07,485] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2023-09-05 12:14:07,485] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in 119576/V12-CVD/output/sub-119576_ses-V12-CVD_brainmask/BallStick_r1/tmp_results.
[2023-09-05 12:14:07,665] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (334452 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2023-09-05 12:14:07,666] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2023-09-05 12:14:07,666] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.3
[2023-09-05 12:14:07,666] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a single precision float type for the calculations.
[2023-09-05 12:14:07,666] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'GPU - Tesla V100-SXM2-16GB (NVIDIA CUDA)'.
[2023-09-05 12:14:07,666] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2023-09-05 12:14:07,666] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Powell with default settings.
Traceback (most recent call last):
File "/usr/bin/mdt-model-fit", line 11, in <module>
load_entry_point('mdt==1.2.6', 'console_scripts', 'mdt-model-fit')()
File "/usr/lib/python3/dist-packages/mdt/lib/shell_utils.py", line 47, in console_script
cls().start(sys.argv[1:])
File "/usr/lib/python3/dist-packages/mdt/lib/shell_utils.py", line 66, in start
self.run(args, {})
File "/usr/lib/python3/dist-packages/mdt/cli_scripts/mdt_model_fit.py", line 161, in run
fit_model()
File "/usr/lib/python3/dist-packages/mdt/cli_scripts/mdt_model_fit.py", line 155, in fit_model
use_cascaded_inits=args.use_cascaded_inits)
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 191, in fit_model
double_precision=double_precision)
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 99, in get_optimization_inits
double_precision=double_precision)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/model_fitting.py", line 177, in get_optimization_inits
return get_init_data(model_name)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/model_fitting.py", line 115, in get_init_data
fit_results = get_model_fit('BallStick_r1')
File "/usr/lib/python3/dist-packages/mdt/lib/processing/model_fitting.py", line 86, in get_model_fit
cl_device_ind=cl_device_ind, initialization_data={'inits': inits})
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 206, in fit_model
optimizer_options=optimizer_options)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/model_fitting.py", line 306, in fit_composite_model
return processing_strategy.process(worker)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/processing_strategies.py", line 106, in process
self._process_chunk(processor, chunks)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/processing_strategies.py", line 151, in _process_chunk
process()
File "/usr/lib/python3/dist-packages/mdt/lib/processing/processing_strategies.py", line 148, in process
processor.process(chunk, next_indices=next_chunk)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/processing_strategies.py", line 275, in process
self._process(roi_indices, next_indices=next_indices)
File "/usr/lib/python3/dist-packages/mdt/lib/processing/model_fitting.py", line 371, in _process
x0 = self._codec.encode(self._initial_params[roi_indices], kernel_data_subset)
File "/usr/lib/python3/dist-packages/mdt/model_building/utils.py", line 220, in encode
parameters, kernel_data, cl_runtime_info=cl_runtime_info)
File "/usr/lib/python3/dist-packages/mdt/model_building/utils.py", line 257, in _transform_parameters
cl_named_func.evaluate(kernel_data, parameters.shape[0], cl_runtime_info=cl_runtime_info)
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 356, in evaluate
kernels = get_kernels(kernel_source, cl_function.get_cl_function_name())
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 350, in get_kernels
env.context, kernel_source).build(' '.join(cl_runtime_info.compile_flags))
File "/usr/lib/python3/dist-packages/pyopencl/__init__.py", line 462, in build
options_bytes=options_bytes, source=self._source)
File "/usr/lib/python3/dist-packages/pyopencl/__init__.py", line 506, in _build_and_catch_errors
raise err
pyopencl.cffi_cl.RuntimeError: clBuildProgram failed: BUILD_PROGRAM_FAILURE -
Build on <pyopencl.Device 'Tesla V100-SXM2-16GB' on 'NVIDIA CUDA' at 0x1f815a0>:
(options: -cl-denorms-are-zero -cl-mad-enable -cl-no-signed-zeros -I /usr/lib/python3/dist-packages/pyopencl/cl)
(source saved as /tmp/tmpqg6b71u6.cl)
Thanks!
Hi!
Using MDT 0.10.7, I can't seem to get OpenCL context creation to work.
mdt-list-devices
fails with the following stack trace:
$ mdt-list-devices
Traceback (most recent call last):
File "/share/software/user/open/py-mdt/0.10.7_py36/bin/mdt-list-devices", line 11, in <module>
sys.exit(ListDevices.console_script())
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mdt/shell_utils.py", line 44, in console_script
cls().start(sys.argv[1:])
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mdt/shell_utils.py", line 63, in start
self.run(args, {})
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mdt/cli_scripts/mdt_list_devices.py", line 28, in run
for ind, env in enumerate(cl_environments.CLEnvironmentFactory.smart_device_selection()):
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mot/cl_environments.py", line 255, in smart_device_selection
cl_environments = CLEnvironmentFactory.all_devices()
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mot/cl_environments.py", line 235, in all_devices
env = CLEnvironment(platform, device)
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mot/cl_environments.py", line 23, in __init__
self._cl_context = CLRunContext(self)
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/mot/cl_environments.py", line 151, in __init__
self.context = cl.Context([cl_environment.device])
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/pyopencl/cffi_cl.py", line 811, in __init__
num_devices, _devices))
File "/share/software/user/open/py-mdt/0.10.7_py36/lib/python3.6/site-packages/pyopencl/cffi_cl.py", line 664, in _handle_error
raise e
pyopencl.cffi_cl.LogicError: clCreateContext failed: INVALID_DEVICE
This is with CUDA 9.1 and a Tesla P100 GPU:
$ nvidia-smi -L
GPU 0: Tesla P100-SXM2-16GB (UUID: GPU-[...])
OpenCL programs work properly in the same context, including the PyOpenCL benchmark:
$ python3 benchmark.py
Execution time of test without OpenCL: 0.10260295867919922 s
===============================================================
Platform name: NVIDIA CUDA
Platform profile: FULL_PROFILE
Platform vendor: NVIDIA Corporation
Platform version: OpenCL 1.2 CUDA 9.1.84
---------------------------------------------------------------
Device name: Tesla P100-SXM2-16GB
Device type: GPU
Device memory: 16280 MB
Device max clock speed: 1480 MHz
Device compute units: 56
Device max work group size: 1024
Device max work item sizes: [1024, 1024, 64]
Data points: 8388608
Workers: 256
Preferred work group size multiple: 32
Execution time of test: 0.000208416 s
Results OK
Do you have any idea what the problem may be?
Thanks!
Dear Robbert
I'm trying to fix some maps in the initialization of a cascaded fit; however, the final maps do not exactly match the ones I selected.
My code follows
mdt.fit_model(MODEL,
              input_data,
              OUTPUT_dir,
              method=METHOD,
              initialization_data={
                  'inits': {
                      'GDRCylinders.phi': phi_data,
                      'GDRCylinders.theta': theta_data},
                  'fixes': {
                      # after running this, MODEL['w_hin.w'] does not equal AxCaliber_results['w_hin.w']
                      'w_hin.w': AxCaliber_results['w_hin.w'],
                      # likewise, MODEL['GDRCylinders.R'] does not equal the fixed input below
                      'GDRCylinders.R': AxCaliberPlusM_results_NI['GDRCylinders.R']}
              })
Is there any other way to precisely fix some maps in the fit?
Thanks a lot,
Best
Stefania
Dear Robbert
I would like to fit the diffusion signal from a rat with the CHARMED_r2 model. Unfortunately, it always fails with
"RuntimeError: clEnqueueNDRangeKernel failed: OUT_OF_RESOURCES", regardless of the number of voxels processed at once (10000, 5000, 500, 10, 1). My GPU is an A6000 ...
Thanks for any suggestion
Best
Stefania Oliviero
Dear,
I'm trying to run post-processing using mdt.sample_model(). I have run it successfully on a Mac, but now that I am trying to run it on Linux, it gives me the following error.
Do you have an idea what I should do?
Thank you!
samples=mdt.sample_model('NODDI',input_data,'output',nmr_samples=500,burnin=0,thinning=0,initialization_data={'inits':mle})
File "/usr/lib/python3/dist-packages/mdt/init.py", line 321, in sample_model
sampler_options=sampler_options)
File "/usr/lib/python3/dist-packages/mdt/lib/model_sampling.py", line 102, in sample_composite_model
return processing_strategy.process(worker)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 75, in process
self._process_chunk(processor, chunks)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 120, in _process_chunk
process()
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 117, in process
processor.process(chunk, next_indices=next_chunk)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 293, in process
self._process(roi_indices, next_indices=next_indices)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 606, in _process
self._write_output_recursive(maps_to_save, roi_indices)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 643, in _write_output_recursive
self._write_output_recursive(value, roi_indices, os.path.join(sub_dir, key))
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 647, in _write_output_recursive
self._write_volumes(current_output, roi_indices, os.path.join(self._tmp_storage_dir, sub_dir))
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 342, in _write_volumes
self._write_volume(result_array, volume_indices, filename)
File "/usr/lib/python3/dist-packages/mdt/lib/processing_strategies.py", line 367, in _write_volume
shape=self._mask.shape[0:3] + extra_dims)
File "/usr/lib/python3/dist-packages/numpy/lib/format.py", line 792, in open_memmap
mode=mode, offset=offset)
File "/usr/lib/python3/dist-packages/numpy/core/memmap.py", line 221, in new
fid = open(filename, (mode == 'c' and 'r' or mode)+'b')
OSError: [Errno 95] Operation not supported: 'output/NODDI/samples/tmp_results/model_defined_maps/NODDI_IC.vec0.npy'
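For what it's worth, OSError 95 ("Operation not supported") on the .npy write usually means the output directory sits on a filesystem that cannot memory-map files (some NFS, FUSE, or container overlay mounts), which numpy's open_memmap requires for the temporary results. A small stdlib check (pass whatever output path you are using) to see whether a given location supports mmap:

```python
import mmap
import tempfile

def supports_mmap(directory):
    """Return True if files created in `directory` can be memory-mapped."""
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        f.write(b'\0' * mmap.PAGESIZE)
        f.flush()
        try:
            with mmap.mmap(f.fileno(), mmap.PAGESIZE):
                return True
        except OSError:
            return False

print(supports_mmap('.'))  # run this with your MDT output directory instead of '.'
```

If this returns False for the output folder, pointing the MDT output at local storage (e.g. a local scratch disk) is a likely workaround.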
We are running mdt-model-fit:
mdt-model-fit \
    --cl-device-ind 0 -o "output1/powell_double" \
    -n "1" \
    --double \
    --no-recalculate \
    "NODDI" \
    "data.nii" \
    "bvecs.prtcl" \
    "nodif_brain_mask.nii"
The command hangs forever after the following output:
--------------------------------------------------------------
-------------------------Watson double ------------------------
--------------------------------------------------------------
[2020-03-07 16:20:02,631] [INFO] [mdt.lib.processing.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2020-03-07 16:20:02,778] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.2
[2020-03-07 16:20:02,779] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2020-03-07 16:20:02,995] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 104 volumes.
[2020-03-07 16:20:02,996] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2020-03-07 16:20:02,996] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2020-03-07 16:20:02,997] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in output1/powell_double/BallStick_r1/tmp_results.
[2020-03-07 16:20:03,286] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 100000 voxels (149524 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2020-03-07 16:20:03,287] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2020-03-07 16:20:03,288] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.1
[2020-03-07 16:20:03,288] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a double precision float type for the calculations.
[2020-03-07 16:20:03,289] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'GPU - TITAN RTX (NVIDIA CUDA)'.
[2020-03-07 16:20:03,289] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2020-03-07 16:20:03,289] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Powell with default settings.
[2020-03-07 16:20:17,327] [INFO] [mdt.lib.processing.model_fitting] [_process] - Finished optimization
[2020-03-07 16:20:17,328] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting post-processing
The command is launched from inside the docker container created using the pull request #20
We have a TITAN RTX GPU correctly configured on a Ubuntu 18.04 machine:
$ nvidia-smi
Sat Mar 7 16:41:19 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64 Driver Version: 440.64 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN RTX Off | 00000000:3B:00.0 Off | N/A |
| 41% 34C P8 13W / 280W | 112MiB / 24212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 2004 G /usr/lib/xorg/Xorg 39MiB |
| 0 2120 G /usr/bin/gnome-shell 70MiB |
+-----------------------------------------------------------------------------+
We use Docker 19.03.7
$ docker version
Client: Docker Engine - Community
Version: 19.03.7
API version: 1.40
Go version: go1.12.17
Git commit: 7141c199a2
Built: Wed Mar 4 01:22:36 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.7
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: 7141c199a2
Built: Wed Mar 4 01:21:08 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
Docker is correctly configured to support GPU:
$ docker run --gpus all nvidia/cuda nvidia-smi
Sat Mar 7 15:46:14 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64 Driver Version: 440.64 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN RTX Off | 00000000:3B:00.0 Off | N/A |
| 41% 34C P8 13W / 280W | 112MiB / 24212MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
We built the image with the command:
docker build -t mdt -f containers/Dockerfile.intel .
but when we run:
$ docker run --gpus all mdtold mdt-list-devices
we obtain
Traceback (most recent call last):
File "/usr/bin/mdt-list-devices", line 9, in <module>
load_entry_point('mdt==1.2.2', 'console_scripts', 'mdt-list-devices')()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 542, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2569, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2229, in load
return self.resolve()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 8, in <module>
import mot
File "/usr/lib/python3/dist-packages/mot/__init__.py", line 3, in <module>
from .optimize import minimize, get_minimizer_options
File "/usr/lib/python3/dist-packages/mot/optimize/__init__.py", line 1, in <module>
from mot.lib.cl_function import SimpleCLFunction
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 6, in <module>
from mot.configuration import CLRuntimeInfo
File "/usr/lib/python3/dist-packages/mot/configuration.py", line 20, in <module>
from .lib.cl_environments import CLEnvironmentFactory
File "/usr/lib/python3/dist-packages/mot/lib/cl_environments.py", line 177, in <module>
_cl_environment_cache = _initialize_cl_environment_cache()
File "/usr/lib/python3/dist-packages/mot/lib/cl_environments.py", line 166, in _initialize_cl_environment_cache
context = cl.Context(devices)
pyopencl.RuntimeError: Context failed: device not available
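A pyopencl "device not available" inside a container often means the NVIDIA runtime exposed the GPU device node but not its OpenCL user-space libraries. Two speculative checks (the image name mirrors the build command above; the flags are an assumption, not a verified fix):

```shell
# Ask the NVIDIA container runtime for the 'compute' capability explicitly;
# that is what OpenCL/CUDA libraries need ('utility' covers nvidia-smi).
docker run --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    mdt mdt-list-devices

# Verify an OpenCL ICD is actually registered inside the container.
docker run --gpus all mdt ls /etc/OpenCL/vendors/
```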
Hi all,
I need to run a Levenberg-Marquardt fit on an acquisition with more than 1000 spatial directions, and I verified that this requires more than 64 GB of GPU VRAM. I would like to know whether MDT can distribute the work across multiple GPU nodes.
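Not an MDT feature as such, but since MDT already fits voxels in chunks (the logs elsewhere in this thread show "_process_chunk ... next 8865 voxels"), a first mitigation is bounding the chunk size so each batch fits in one card's VRAM. A rough, illustrative calculation (the overhead factor is a made-up assumption, not MDT's real memory model):

```python
# Illustrative sketch, not MDT API: pick a voxel batch size so one chunk's
# working set fits inside a VRAM budget.
def voxel_batch_size(n_directions, vram_bytes, bytes_per_value=4, overhead_factor=16):
    # overhead_factor is a made-up fudge for intermediate buffers; tune empirically.
    per_voxel = n_directions * bytes_per_value * overhead_factor
    return max(1, vram_bytes // per_voxel)

# With 1000 directions and a 24 GiB card:
print(voxel_batch_size(1000, 24 * 2**30))
```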
Hello,
When attempting to run through the example data set provided using the GUI, the toolbox aborts. I'm not sure if this is something to do with MDT, with Python, or possibly with an internal issue at the institute where I work.
At the beginning, when selecting the input folders, a 'setNativeLocks failed: Resource temporarily unavailable' message is returned. Then, when I run the modeling, it aborts with 'Unhandled Python exception
Aborted'
Any help would be greatly appreciated!
inn: ~ $ MDT
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
setNativeLocks failed: Resource temporarily unavailable
[2017-05-15 15:45:44,060] [INFO] [mdt.utils] [configure_per_model_logging] - Started appending to the per model log file
[2017-05-15 15:45:44,060] [INFO] [mdt.model_fitting] [_run_composite_model] - Using MDT version 0.9.31
[2017-05-15 15:45:44,060] [INFO] [mdt.model_fitting] [_run_composite_model] - Preparing for model S0
...
[2017-05-15 15:45:50,577] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Finished optimization preliminaries
[2017-05-15 15:45:50,577] [INFO] [mot.cl_routines.optimizing.base] [minimize] - Starting optimization
Unhandled Python exception
Aborted
Using the same model name twice raises an error here:
MDT/mdt/model_building/trees.py
Line 110 in 79012e8
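For reference, the usual workaround is to give repeated compartments distinct names yourself; MDT's own generated parameter names ('Stick0.theta', 'Stick0.phi' in the fit logs) suggest an index-suffix scheme, sketched here as plain Python (illustrative only, not MDT's code):

```python
from collections import Counter

def uniquify(names):
    """Suffix repeated names with an increasing index (illustrative only)."""
    counts = Counter()
    result = []
    for name in names:
        result.append(f"{name}{counts[name]}")
        counts[name] += 1
    return result

print(uniquify(["Stick", "Stick", "Ball"]))
```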
Dear Robbert,
I am trying to have a go at using the MDT with the provided example data, following along with the great documentation you wrote. However, the program seems to crash whenever I try to do the NODDI estimation in section 3.3.
What happens is that MDT starts processing and after a few seconds the log gives a "CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more" message.
From what I gather from some online searches about the error, it may have something to do with openCL and mac compatibility issues, but I'm hoping you might be familiar with this issue in MDT and could give me some insight whether there is a way to resolve it.
For reference, my specs are below:
OS X Yosemite (indeed, it's ancient, perhaps you have a suggestion for which mac OS seems to be most reliable for MDT?)
CPU used: Intel Xeon-e5-1650 v2.0 @ 3.5GHz
Using Python 3.7.6 and anaconda v2020.20 build py37.0
Many thanks for developing this wonderful toolbox and kind regards,
Roy
We are getting the error: AttributeError: module 'pyopencl.cltypes' has no attribute '4'. What does this mean?
Full trace:
(mdt_venv) [sgranger@node5 sgranger]$ mdt-model-fit -o /data/sgranger/mdt_venv/bin/ NODDI /data/sgranger/mdt_venv/bin/mdt_example_data/b1k_b2k/b1k_b2k_example_slices_24_38.nii.gz /data/sgranger/mdt_venv/bin/mdt_example_data/b1k_b2k/b1k_b2k.prtcl /data/sgranger/mdt_venv/bin/mdt_example_data/b1k_b2k/b1k_b2k_example_slices_24_38_mask.nii.gz
[2022-10-13 13:42:46,323] [INFO] [mdt.lib.processing.model_fitting] [get_model_fit] - Starting intermediate optimization for generating initialization point.
[2022-10-13 13:42:46,388] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.6
[2022-10-13 13:42:46,388] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2022-10-13 13:42:46,406] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 103 volumes.
[2022-10-13 13:42:46,406] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2022-10-13 13:42:46,407] [INFO] [mdt.utils] [estimate_noise_std] - Estimated global noise std 19.613178253173828.
[2022-10-13 13:42:46,408] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2022-10-13 13:42:46,408] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2022-10-13 13:42:46,408] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in /data/sgranger/mdt_venv/bin/BallStick_r1/tmp_results.
Traceback (most recent call last):
File "/data/sgranger/mdt_venv/bin/mdt-model-fit", line 8, in <module>
sys.exit(ModelFit.console_script())
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/shell_utils.py", line 47, in console_script
cls().start(sys.argv[1:])
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/shell_utils.py", line 66, in start
self.run(args, {})
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/cli_scripts/mdt_model_fit.py", line 161, in run
fit_model()
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/cli_scripts/mdt_model_fit.py", line 146, in fit_model
mdt.fit_model(args.model,
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/__init__.py", line 189, in fit_model
inits = get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind,
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/__init__.py", line 97, in get_optimization_inits
return get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=cl_device_ind,
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/processing/model_fitting.py", line 177, in get_optimization_inits
return get_init_data(model_name)
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/processing/model_fitting.py", line 115, in get_init_data
fit_results = get_model_fit('BallStick_r1')
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/processing/model_fitting.py", line 84, in get_model_fit
results = fit_model(model_name, input_data, output_folder, recalculate=False, use_cascaded_inits=False,
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/__init__.py", line 204, in fit_model
fit_composite_model(model_instance, input_data, output_folder, method,
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/processing/model_fitting.py", line 301, in fit_composite_model
worker = FittingProcessor(method, model, input_data.mask,
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/lib/processing/model_fitting.py", line 349, in __init__
self._kernel_data = self._model.get_kernel_data()
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/models/composite.py", line 154, in get_kernel_data
'protocol': self._get_protocol_data_as_var_data(),
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mdt/models/composite.py", line 1420, in _get_protocol_data_as_var_data
const_d = {p.name: Array(value, ctype=p.ctype, parallelize_over_first_dimension=False)}
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mot/lib/kernel_data.py", line 696, in __init__
self._data = convert_data_to_dtype(self._data, ctype)
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mot/lib/utils.py", line 181, in convert_data_to_dtype
scalar_dtype = ctype_to_dtype(data_type, mot_float_type)
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/mot/lib/utils.py", line 148, in ctype_to_dtype
return getattr(cl_array.vec, vector_type)
File "/data/sgranger/mdt_venv/lib/python3.9/site-packages/pyopencl/array.py", line 210, in __getattr__
return getattr(cltypes, name)
AttributeError: module 'pyopencl.cltypes' has no attribute '4'
(mdt_venv) [sgranger@node5 sgranger]$
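The traceback shows mot splitting a ctype such as 'float4' into a base type and a vector length, then looking the result up on pyopencl.cltypes; an attribute named just '4' suggests the base-type part came out empty, most likely because this pyopencl version names or parses types differently than mot expects. A standalone sketch of that split (a guess at the mechanism, not mot's actual code):

```python
import re

def split_ctype(ctype):
    """Split an OpenCL ctype like 'float4' into ('float', 4). Illustrative only."""
    match = re.fullmatch(r'([a-zA-Z_]+?)(\d*)', ctype)
    base, length = match.group(1), match.group(2)
    return base, int(length) if length else 1

print(split_ctype('float4'))           # base type plus vector length
print(split_ctype('mot_float_type4'))  # MDT's configurable float type
```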
Hello!
I'm trying to run your program through mdt-gui; however, after I start the model-fitting running, it crashes with the error:
File "/usr/lib/python3.6/site-packages/mdt/gui/utils.py", line 84, in _decorator
response = dec_func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/mdt/gui/model_fit/tabs/fit_model_tab.py", line 676, in run
mdt.fit_model(*self._args, **self._kwargs)
File "/usr/lib/python3.6/site-packages/mdt/__init__.py", line 130, in fit_model
results = model_fit.run()
File "/usr/lib/python3.6/site-packages/mdt/model_fitting.py", line 191, in run
_, maps = self._run(self._model, self._recalculate, self._only_recalculate_last)
File "/usr/lib/python3.6/site-packages/mdt/model_fitting.py", line 228, in _run
_in_recursion=new_in_recursion)
File "/usr/lib/python3.6/site-packages/mdt/model_fitting.py", line 228, in _run
_in_recursion=new_in_recursion)
File "/usr/lib/python3.6/site-packages/mdt/model_fitting.py", line 228, in _run
_in_recursion=new_in_recursion)
File "/usr/lib/python3.6/site-packages/mdt/model_fitting.py", line 237, in _run
apply_user_provided_initialization=not _in_recursion)
File "/usr/lib/python3.6/site-packages/mdt/model_fitting.py", line 242, in _run_composite_model
load_balancer=self._cl_runtime_info.load_balancer)):
File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/usr/lib/python3.6/site-packages/mot/configuration.py", line 163, in config_context
config_action.apply()
File "/usr/lib/python3.6/site-packages/mot/configuration.py", line 202, in apply
self._apply()
File "/usr/lib/python3.6/site-packages/mot/configuration.py", line 232, in _apply
set_cl_environments(self._cl_environments)
File "/usr/lib/python3.6/site-packages/mot/configuration.py", line 73, in set_cl_environments
raise ValueError('The list of CL Environments is empty.')
ValueError: The list of CL Environments is empty.
I noticed that kcgthb had a similar looking problem in #5 so I'll try to provide similar information.
[~] : nvidia-smi -L 15:51:09
GPU 0: NVS 300 (UUID: GPU-8bf09968-7f05-facf-4b2c-1ab6ee82491b)
GPU 1: Quadro NVS 295 (UUID: GPU-ae55d10f-2da8-9e6b-0ce1-517f10745f43)
I couldn't find the same benchmark he used, but I ran this to test my pyopencl install (I had to modify print statements to make it compatible with python3 though)
https://gist.github.com/likr/2939374
Trying to get devices from your first response gives
Python 3.6.5 (default, May 11 2018, 04:00:52)
[GCC 8.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyopencl as cl
>>> cl.get_platforms()
[<pyopencl.Platform 'NVIDIA CUDA' at 0x564a1fb298a0>]
>>> import mdt
>>> mdt.get_cl_devices()
[]
>>>
Then as requested in the second response
>>> platform=cl.get_platforms()[0]
>>> device = platform.get_devices()[0]
>>> context = cl.Context([device])
>>> print(platform)
<pyopencl.Platform 'NVIDIA CUDA' at 0x55b890a2ccb0>
>>> print(device)
<pyopencl.Device 'NVS 300' on 'NVIDIA CUDA' at 0x55b8909c90c0>
>>> print(context)
<pyopencl.Context at 0x55b8905b4d20 on <pyopencl.Device 'NVS 300' on 'NVIDIA CUDA' at 0x55b8909c90c0>>
>>>
Expanding to the third
>>> env=CLEnvironment(platform,device)
>>> print(env)
GPU - NVS 300 (NVIDIA CUDA)
Fourth
>>> class MockEnvironment(object):
... def __init__(self, platform, device):
... self._platform = platform
... self._device = device
... @property
... def platform(self):
... return self._platform
... @property
... def device(self):
... return self._device
...
>>> class MockContext(object):
... def __init__(self, cl_environment):
... self.context = cl.Context([cl_environment.device])
... self.queue = cl.CommandQueue(self.context, device=cl_environment.device)
...
>>> context = MockContext(MockEnvironment(platform, device))
>>> print(context)
<__main__.MockContext object at 0x7f8f7e0c92e8>
>>>
And I'll stop there because it looks like the remaining part of the post is addressing a bug in that last snippet.
If it's any help, I'm running linux
4.16.8-1-ARCH
and my opencl implementation is opencl-nvidia-340xx from https://www.archlinux.org/packages/extra/x86_64/opencl-nvidia-340xx/
Let me know if there's any other information I can provide that would be of help
Hi. I'm hoping someone could help point me in the right direction. I am new to mdt and using the sample data from mdt.get_example_data.
I am using a really simple call but consistently receive the error: ValueError: The model with the name "NODDI" could not be found.
Is this a problem with the installation (it can't find NODDI)? I used pip install mdt and can call other functions (e.g. mdt.load_input_data and mdt.create_protocol), but I am not sure how to proceed using the toolbox with this error.
input_data = mdt.load_input_data(
path+'b1k_b2k_example_slices_24_38',
path+'b1k_b2k.prtcl',
path+'b1k_b2k_example_slices_24_38_mask')
mdt.fit_model('NODDI', input_data, r'C:\Users\sgranger\OneDrive - Mass General Brigham\Desktop\mdt_example_data\b1k_b2k')
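One thing to watch with Windows paths in Python, independent of the model-lookup error: in an ordinary string literal, backslashes start escape sequences (and \U demands a unicode escape), so unescaped paths like 'C:\Users\...' raise a SyntaxError. Raw strings or forward slashes avoid this:

```python
# Raw strings leave backslashes alone; forward slashes also work on Windows.
path_raw = r'C:\Users\sgranger\OneDrive - Mass General Brigham\Desktop\mdt_example_data\b1k_b2k'
path_fwd = 'C:/Users/sgranger/OneDrive - Mass General Brigham/Desktop/mdt_example_data/b1k_b2k'

print(path_raw)
```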
Hi all,
I created two new models for microstructure (Model A and Model B).
I would like to implement the following steps:
1. Fit a signal (map) with Model A, to obtain the maps of its parameters A.p1, A.p2, A.p3.
2. Fit the same signal (map) with Model B, to obtain the maps of its parameters B.p1, B.p2, B.p3, B.p4.
2.1 In this second fit, I need to FIX the map of the parameter B.p1 to the map of the parameter A.p1.
Is there any way to do this?
Thanks a lot in advance
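Per my reading of the MDT documentation, fit_model accepts an initialization_data argument with a 'fixes' mapping; assuming that API (and treating the model names, parameter names, and paths below as placeholders), the second fit could look roughly like this. Whether a NIfTI path is accepted directly versus a loaded array may depend on the MDT version:

```python
# Sketch (assumed API): fix Model B's p1 to the p1 map produced by Model A.
# 'B.p1' and the path are placeholders for illustration.
initialization_data = {
    'fixes': {
        'B.p1': 'output/ModelA/A.p1.nii.gz',  # map from the first fit
    },
}

# The second fit would then be (not executed here):
# mdt.fit_model('ModelB', input_data, 'output',
#               initialization_data=initialization_data)

print(initialization_data['fixes'])
```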
Hi!
I am trying to install MDT on a CentOS 7 server. However, I cannot find any specific instructions on how to do it. Both the native Linux procedure and the Docker one give me errors. Could you please give me some hints on how to do it?
thanks in advance,
Rosella
We've used MDT for a number of years with a lot of success, running on workstations with NVIDIA GPUs. Some time back, I'd posted about issues with OpenCL Intel processing not being multi-threaded. While the problem got solved in that we used many threads, the output is never close to what the GPU numbers are. Here's a screenshot showing a GPU-derived image on the left and a CPU / Singularity image on the right:
A bit of decoding -- the arrow points to a section showing the values under the cursor across multiple runs. There are two NVIDIA runs on different machines that give identical results of 0.5777... (good). There's also a run on one of these machines setting the OpenCL device to 'CPU - pthread-Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz (Portable Computing Language)'. This comes very, very close (0.578), which I'll attribute to different floating-point units. Now, the GPU run took <1 minute and the CPU one took 934 minutes on a 20-core machine, which was a bit insane, but at least the numbers lined up.
Moving on, though, to attempts to bundle this in Singularity. I've used the supplied script, my own script, and many variants on each. In the end, I get images that are either a constant 0.5 everywhere within the mask or images like the one on the right that show something of the brain, but whose values are way, way off -- 0.0256 in the example here. It's almost as if the values are being cast into the wrong format. We run quickly as the threading works well here, but this is clearly quite wrong. I'd love to be able to fix this, but I've thrown at it what I can think of and hit the wall.
The same subject was run through HCP (4.0.1) Preprocessing followed by both MDT's kurtosis fitting and DKE kurtosis estimation.
Looking at the axial kurtosis maps (I assume "KurtosisTensor.AK.nii.gz" is axial kurtosis), the MDT output has much higher values across the white matter than the corresponding DKE output. Why are the results this different?
The mean kurtosis (MK) maps look similar across MDT and DKE, but with more dark voxels (indicating failed model fitting?) in the MDT output. Is this due to some smoothing or robust fitting that is included in DKE but not in MDT?
I ran MDT like so:
mdt-model-fit \
Kurtosis \
/input_dir/${sub}/${ses}/T1w/Diffusion/data.nii.gz \
/input_dir/${sub}/${ses}/T1w/Diffusion/bvecs.prtcl \
/input_dir/${sub}/${ses}/T1w/Diffusion/nodif_brain_mask.nii.gz \
--gradient-deviations /input_dir/${sub}/${ses}/T1w/Diffusion/grad_dev.nii.gz
Dear Robbert,
I have installed MDT on a Windows 10 computer and everything seems to be installed properly. However, I can't get my GPU (Nvidia RTX 2080) to be part of the list of the openCL devices, whereas my CPU (i7-8700k) appears correctly.
I have carefully followed every step of both the MOT and MDT installation guides, but I can't seem to find a solution. Could you help me with this problem?
I'm really looking forward to using MDT on my GPU, as it should save me a lot of time compared to CPU processing.
Thank you in advance for your help,
Maxime Van Egroo
EDIT: Apparently, it seems that the 'Game-ready' drivers provided by the Nvidia Geforce Experience platform do not include openCL support by default, but switching to 'Studio' drivers solved it, and my GPU now correctly appears in the MDT devices list. Sorry for the trouble!
sudo singularity exec mdt_image_v1 mdt-list-devices
Output Error:
Traceback (most recent call last):
File "/usr/bin/mdt-list-devices", line 9, in <module>
load_entry_point('mdt==1.2.1', 'console_scripts', 'mdt-list-devices')()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 542, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2569, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2229, in load
return self.resolve()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/mdt/__init__.py", line 8, in <module>
import mot
File "/usr/lib/python3/dist-packages/mot/__init__.py", line 3, in <module>
from .optimize import minimize, get_minimizer_options
File "/usr/lib/python3/dist-packages/mot/optimize/__init__.py", line 1, in <module>
from mot.lib.cl_function import SimpleCLFunction
File "/usr/lib/python3/dist-packages/mot/lib/cl_function.py", line 3, in <module>
import tatsu
File "/usr/local/lib/python3.5/dist-packages/tatsu/__init__.py", line 6, in <module>
from tatsu.tool import ( # pylint: disable=W0622
File "/usr/local/lib/python3.5/dist-packages/tatsu/tool.py", line 16, in <module>
from tatsu.parser import GrammarGenerator
File "/usr/local/lib/python3.5/dist-packages/tatsu/parser.py", line 4, in <module>
from tatsu.bootstrap import EBNFBootstrapParser
File "/usr/local/lib/python3.5/dist-packages/tatsu/bootstrap.py", line 18, in <module>
from tatsu.buffering import Buffer
File "/usr/local/lib/python3.5/dist-packages/tatsu/buffering.py", line 20, in <module>
from .infos import PosLine, LineIndexInfo, LineInfo, CommentInfo
File "/usr/local/lib/python3.5/dist-packages/tatsu/infos.py", line 6, in <module>
from .ast import AST
File "/usr/local/lib/python3.5/dist-packages/tatsu/ast.py", line 73
f'{type(self).__name__} attributes are fixed. '
Hello,
I'm trying to run the test_example_data.py file under the tests folder and was wondering how long this test should take. I've tried running it a few times (never to completion), but every time it stays hung at the point below for a few hours, at which point I kill the process because I can't tell whether it is making progress:
Singularity> python3 test_example_data.py
[2023-10-19 13:12:55,642] [INFO] [mdt] [fit_model] - Preparing BallStick_r1 with the cascaded initializations.
[2023-10-19 13:12:55,644] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Using MDT version 1.2.6
[2023-10-19 13:12:55,644] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Preparing for model BallStick_r1
[2023-10-19 13:12:55,660] [INFO] [mdt.models.composite] [_prepare_input_data] - No volume options to apply, using all 103 volumes.
[2023-10-19 13:12:55,660] [INFO] [mdt.utils] [estimate_noise_std] - Trying to estimate a noise std.
[2023-10-19 13:12:55,662] [INFO] [mdt.utils] [estimate_noise_std] - Estimated global noise std 19.613178253173828.
[2023-10-19 13:12:55,662] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - Fitting BallStick_r1 model
[2023-10-19 13:12:55,662] [INFO] [mdt.lib.processing.model_fitting] [_model_fit_logging] - The 4 parameters we will fit are: ['S0.s0', 'w_stick0.w', 'Stick0.theta', 'Stick0.phi']
[2023-10-19 13:12:55,662] [INFO] [mdt.lib.processing.model_fitting] [fit_composite_model] - Saving temporary results in /tmp/tmpyxp3i3bfmdt_example_data_test/mdt_example_data/b1k_b2k/output/b1k_b2k_example_slices_24_38_mask/BallStick_r1/tmp_results.
/usr/lib/python3/dist-packages/mot/lib/utils.py:148: DeprecationWarning: pyopencl.array.vec is deprecated. Please use pyopencl.cltypes for OpenCL vector and scalar types
return getattr(cl_array.vec, vector_type)
[2023-10-19 13:12:55,785] [INFO] [mdt.lib.processing.processing_strategies] [_process_chunk] - Computations are at 0.00%, processing next 8865 voxels (8865 voxels in total, 0 processed). Time spent: 0:00:00:00, time left: ? (d:h:m:s).
[2023-10-19 13:12:55,785] [INFO] [mdt.lib.processing.model_fitting] [_process] - Starting optimization
[2023-10-19 13:12:55,785] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using MOT version 0.11.3
[2023-10-19 13:12:55,785] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use a single precision float type for the calculations.
[2023-10-19 13:12:55,786] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using device 'GPU - NVIDIA A100-PCIE-40GB (NVIDIA CUDA)'.
[2023-10-19 13:12:55,786] [INFO] [mdt.lib.processing.model_fitting] [_process] - Using compile flags: ('-cl-denorms-are-zero', '-cl-mad-enable', '-cl-no-signed-zeros')
[2023-10-19 13:12:55,786] [INFO] [mdt.lib.processing.model_fitting] [_process] - We will use the optimizer Powell with optimizer settings {'patience': 2}
/usr/lib/python3/dist-packages/pytools/py_codegen.py:146: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Thanks!
Hi @robbert-harms,
I appreciate this toolbox and I am having good success using this with ex-vivo mouse brains. I do have a question to ensure that I am doing this "the right way." In the modeling, the noise standard deviation is computed within the mask. If I have previously denoised my data (e.g., with DIPY's patch2self), how does this affect MDT's modeling (particularly for NODDI)?
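If denoising changes the residual noise level, one option is to supply the noise standard deviation yourself instead of letting MDT estimate it from the (now denoised) data; per my reading of the docs, mdt.load_input_data takes a noise_std argument, though treat the exact signature as an assumption. A toy illustration of estimating sigma from background voxels (the numbers are made up, and MDT's own estimate_noise_std uses its own method):

```python
import statistics

# Toy example only: estimate sigma from intensities outside the brain mask.
background_intensities = [18.2, 21.0, 19.4, 20.1, 18.9, 19.7]  # made-up values
sigma = statistics.stdev(background_intensities)
print(round(sigma, 3))

# Then (assumed API, not executed here):
# input_data = mdt.load_input_data(volume, protocol, mask, noise_std=sigma)
```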
My general workflow is:
Hello,
I am sorry if my question is very trivial; I am a beginner trying to check that the tool works as it should. I think the installation went fine, but when running
python tests/test_example_data.py
both tests fail. Here is one of them:
FAIL: test_lls_multishell_b6k_max (__main__.ExampleDataTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/test_example_data.py", line 108, in test_lls_multishell_b6k_max
np.testing.assert_allclose(test_values['mean'], np.mean(roi),
File "/scratch/work/eglerean/PSC/MDT/env/lib/python3.8/site-packages/numpy/testing/_private/utils.py", line 1527, in assert_allclose
assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
File "/scratch/work/eglerean/PSC/MDT/env/lib/python3.8/site-packages/numpy/testing/_private/utils.py", line 840, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0
b1k_b2k - CHARMED_r1 - LogLikelihood - mean
Mismatched elements: 1 / 1 (100%)
Max absolute difference: 7595.9631958
Max relative difference: 17.17264368
x: array(-8038.29248)
y: NiftiInfoDecoratedArray(-442.32928, dtype=float32)
Do you know what could be the reason? How can I debug this?
thank you so much!
Enrico
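For what it's worth, the failing assertion uses rtol=1e-4, which tolerates only about 0.01% deviation; a gap from -8038 to -442 is a relative difference of roughly 17, which points at a genuinely different fit (device, driver, or precision differences), not rounding noise. The same check in plain Python:

```python
import math

# The test's tolerance: relative difference of at most 1e-4.
expected, actual = -8038.29248, -442.32928
print(math.isclose(expected, actual, rel_tol=1e-4))  # prints False

# Relative difference as assert_allclose reports it (|x - y| / |y|):
print(abs(expected - actual) / abs(actual))
```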