pypreprocess's Introduction

pypreprocess

pypreprocess is a collection of Python scripts for preprocessing fMRI data (motion correction, spatial normalization, smoothing, ...). It provides:

  • the possibility to run preprocessing pipelines from simple text-based configuration files, so that data can be analysed without programming;
  • automatic generation of html reports, for example for quality assurance (e.g. spatial registration checks), statistical results, etc.;
  • parallel processing of multiple subjects on multi-core machines;
  • persistence of intermediate stages: if an analysis is interrupted, cached intermediate files are reused to speed up reprocessing;
  • support for precompiled SPM (besides the usual matlab-dependent flavor).
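The persistence behaviour can be illustrated with a minimal stand-alone sketch. pypreprocess's actual implementation relies on joblib; everything below (the cached decorator, the smooth function) is purely hypothetical:

```python
import hashlib
import json
import os
import pickle
import tempfile

CACHE_DIR = tempfile.mkdtemp()

def cached(func):
    """Persist each call's result to disk, keyed by the function name and
    arguments, so an interrupted run can pick up where it left off."""
    def wrapper(*args):
        key = hashlib.md5(json.dumps([func.__name__, args]).encode()).hexdigest()
        path = os.path.join(CACHE_DIR, key + ".pkl")
        if os.path.exists(path):                 # reuse cached intermediate
            with open(path, "rb") as f:
                return pickle.load(f)
        result = func(*args)
        with open(path, "wb") as f:              # persist for next time
            pickle.dump(result, f)
        return result
    return wrapper

calls = []

@cached
def smooth(fwhm):
    calls.append(fwhm)   # stands in for an expensive preprocessing step
    return fwhm * 2
```

On a second call with the same arguments, the result comes from the on-disk cache and the expensive function body never runs again.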

pypreprocess relies on nipype's interfaces to SPM (both the precompiled and the matlab-dependent SPM flavors). It also has pure-Python modules and scripts (no C extensions, no compiled code, just Python) for slice-timing correction, motion correction, coregistration, and smoothing, with no need for nipype or matlab. It has been developed in a Linux environment and is tested on Ubuntu 16 and 18; there are no guarantees for other OSes.

License

All material is Free Software: BSD license (3 clause).

Important links

Dependencies

  • Python >= 3.5
  • Numpy >= 1.11
  • SciPy >= 0.19
  • Scikit-learn >= 0.19
  • matplotlib >= 1.5.1
  • nibabel >= 2.0.2
  • nipype >= 1.4
  • configobj >= 5.0.6
  • nilearn >= 0.6.2
  • Pandas >= 0.23

Installation

First, you may want to install the pre-compiled version of SPM (in case you don't have matlab, etc.). From the main pypreprocess directory, run:

$ . continuous_integration/install_spm12.sh

Second, install the python packages pip, scipy, pytest, nibabel, scikit-learn, nipype, pandas, matplotlib, nilearn and configobj. If you have a python virtual environment, just run:

$ pip install scipy scikit-learn nibabel nilearn configobj coverage pytest matplotlib pandas nipype

If not, make sure pip is installed (run: 'sudo apt-get install python3-pip'). If you want to install these packages locally, use the --user option:

$ pip install scipy scikit-learn nibabel nilearn configobj coverage pytest matplotlib pandas nipype --ignore-installed --user

If you want to install these for all users, use sudo:

$ pip install scipy scikit-learn nibabel nilearn configobj coverage pytest matplotlib pandas nipype --ignore-installed

Finally, install pypreprocess itself by running the following in the pypreprocess directory:

$ python setup.py install --user

or simply 'python setup.py install' in a virtual environment.

After Installation: A few steps to configure SPM on your own machine

There are three cases:

  • If you used the pypreprocess/continuous_integration/setup_spm.sh or install_spm12.sh script, there is nothing more to do.

  • If you have matlab and SPM installed, specify the location of your SPM installation directory by exporting it as SPM_DIR:

    $ export SPM_DIR=/path/to/spm/installation/dir
    
  • If you have installed a pre-compiled version of SPM, specify the location of the SPM executable and export it as SPM_MCR:

    $ export SPM_MCR=/path/to/spm_mcr_script (i.e. the spm12.sh script)
    
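The way these two variables are consulted can be sketched as follows; the configure_spm helper and the precedence order (SPM_MCR checked first) are illustrative assumptions, not pypreprocess's actual code:

```python
import os

def configure_spm():
    """Resolve the SPM settings exported above.

    NOTE: this helper and its precedence order are assumptions made for
    illustration; see pypreprocess's own code for the real resolution logic.
    """
    spm_mcr = os.environ.get("SPM_MCR")   # precompiled SPM runner script
    spm_dir = os.environ.get("SPM_DIR")   # MATLAB-based SPM install dir
    if spm_mcr:
        return ("precompiled", spm_mcr)
    if spm_dir:
        return ("matlab", spm_dir)
    raise RuntimeError("could not configure SPM: set SPM_DIR or SPM_MCR")
```

If neither variable is set in the current shell, you get exactly the kind of "could not configure SPM" failure described in the next section.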

Getting started: pypreprocess 101

Simply cd to the examples/easy_start/ sub-directory and run the following command:

$ python nipype_preproc_spm_auditory.py

If you get nipype errors like "could not configure SPM", it is most likely because SPM_DIR or SPM_MCR (see above) has not been exported in this shell.

Layout of examples

We have written example scripts for preprocessing some popular datasets. The examples directory contains a set of scripts, each demoing one aspect of pypreprocess. Some scripts also provide use-cases for the nipy-based GLM. The examples use publicly available sMRI and fMRI data, with data fetchers based on the nilearn API. The main example scripts can be summarized as follows:

Very easy examples

  • examples/easy_start/nipype_preproc_spm_auditory.py: demos preprocessing + first-level GLM (using nipy) on the single-subject SPM auditory dataset.
  • examples/easy_start/nipype_preproc_spm_haxby.py: preprocessing of the 'Haxby2001' visual recognition task fMRI dataset.

More advanced examples

  • examples/pipelining/nipype_preproc_spm_multimodal_faces.py: demos preprocessing + first-level fixed-effects GLM on R. Henson's multi-modal face dataset (multiple sessions)
  • examples/pipelining/nistats_glm_fsl_feeds_fmri.py: demos preprocessing + first-level GLM on FSL FEEDS dataset using nistats python package.

Examples using pure Python (no SPM, FSL, etc. required)

  • examples/pure_python/slice_timing_demos.py, examples/pure_python/realign_demos.py, examples/pure_python/coreg_demos.py: demos Slice-Timing Correction (STC), motion-correction, and coregistration on various datasets, using modules written in pure Python
  • examples/pure_python/pure_python_preproc_demo.py: demos intra-subject preprocessing using pure Python modules, on single-subject SPM auditory dataset

Using .ini configuration files to specify pipeline

It is possible (and recommended) to configure the preprocessing pipeline simply by copying one of the .ini configuration files under the examples sub-directory and modifying it (usually you only need to change the dataset_dir parameter), then running:

$ python pypreprocess.py your.ini

For example:

$ python pypreprocess.py examples/easy_start/spm_auditory_preproc.ini
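For a rough idea of what such a file contains, here is a minimal sketch read with the stdlib configparser (pypreprocess itself uses configobj; apart from dataset_dir, which is mentioned above, the section and option names below are hypothetical -- see the real .ini files under examples/):

```python
import configparser

# Hypothetical minimal configuration; consult the shipped example .ini
# files for the actual option names pypreprocess understands.
ini_text = """
[config]
dataset_dir = /path/to/data
output_dir = /path/to/output
fwhm = 8
"""

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)
dataset_dir = cfg["config"]["dataset_dir"]   # the one parameter you usually edit
```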

Pipelines

We provide two main preprocessing pipelines: the standard pipeline and the DARTEL-based pipeline. At the end of either pipeline, each subject's EPI data has been corrected for artefacts and placed into a common reference space (MNI). When you invoke the do_subjects_preproc(..) API of [nipype_preproc_spm_utils.py](https://github.com/neurospin/pypreprocess/blob/master/pypreprocess/nipype_preproc_spm_utils.py) to preprocess a dataset (a group of subjects), the standard pipeline is used by default; passing the option do_dartel=True forces the DARTEL-based pipeline. You can also fine-tune your pipeline using the various supported parameters in your .ini file (see the examples/ sub-directory for examples).

Standard pipeline

For each subject, the following preprocessing steps are performed:

  • Motion correction is done to estimate and correct for the subject's head motion during the acquisition.
  • The subject's anatomical image is coregistered with their fMRI images (precisely, with the mean thereof). Coregistration is important because it allows deformations of the anatomy to be applied directly to the fMRI images, and ROIs to be defined on the anatomy.
  • Tissue segmentation is then employed to segment the anatomical image into GM, WM, and CSF compartments, using TPMs (Tissue Probability Maps) as priors.
  • The segmented anatomical image is then warped into the MNI template space by applying the deformations learned during segmentation. The same deformations are applied to the fMRI images.
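The normalization step amounts to resampling images into a common coordinate frame. As a toy illustration of what "a common space" means, here is how a 4x4 affine (the kind stored in a NIfTI header) maps voxel indices to world (MNI) coordinates in millimetres; the matrix below is made up for the example, not taken from any real template:

```python
import numpy as np

# Illustrative 3 mm isotropic, MNI-like affine (not a real template header)
affine = np.array([[3., 0., 0.,  -78.],
                   [0., 3., 0., -112.],
                   [0., 0., 3.,  -50.],
                   [0., 0., 0.,    1.]])

voxel = np.array([26, 37, 16, 1])   # homogeneous voxel index
mm = affine @ voxel                 # coordinates in world (MNI) space, in mm
```

Once every subject's data carries a mapping into the same world space, voxels can be compared across subjects.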

DARTEL pipeline

Motion correction and coregistration proceed as in the standard pipeline. The only difference between the DARTEL pipeline and the standard one is the way the subjects' EPI images are warped into MNI space.

In the "Dartel pipeline", SPM's [DARTEL](http://www.fil.ion.ucl.ac.uk/spm/software/spm8/SPM8_Release_Notes.pdf) is used to warp subject brains into MNI space.

  • The idea is to register images by computing a “flow field” which can then be “exponentiated” to generate both forward and backward deformations. Processing begins with the “import” step. This involves taking the parameter files produced by the segmentation (NewSegment), and writing out rigidly transformed versions of the tissue class images, such that they are in as close alignment as possible with the tissue probability maps.
  • The next step is the registration itself. This involves the simultaneous registration of e.g. GM with GM, WM with WM, and 1-(GM+WM) with 1-(GM+WM) (when needed, the 1-(GM+WM) class is generated implicitly, so there is no need to include it yourself). The procedure begins by creating a mean of all the images, which is used as an initial template. Deformations from this template to each of the individual images are computed, and the template is then re-generated by applying the inverses of the deformations to the images and averaging. This procedure is repeated a number of times.
  • Finally, warped versions of the images (or other images that are in alignment with them) can be generated.
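The template-refinement loop described above can be mimicked in one dimension, with integer translations standing in for flow fields. This is a deliberately crude sketch of the idea only; nothing here comes from SPM's actual DARTEL code:

```python
import numpy as np

base = np.hanning(32)                             # common underlying "anatomy"
images = [np.roll(base, s) for s in (-2, 0, 3)]   # misaligned "subjects"

def best_shift(img, template, radius=5):
    # brute-force search over circular shifts for the best alignment
    return max(range(-radius, radius + 1),
               key=lambda s: float(np.dot(np.roll(img, s), template)))

# start from the plain mean, then iterate: align every image to the
# template, and rebuild the template from the aligned images
template = np.mean(images, axis=0)
for _ in range(3):
    shifts = [best_shift(img, template) for img in images]
    aligned = [np.roll(img, s) for img, s in zip(images, shifts)]
    template = np.mean(aligned, axis=0)
```

After a few iterations the recovered shifts undo the misalignments (up to a common offset), and the template sharpens into the aligned average, which is the essence of the iterative scheme above.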

[nipype_preproc_spm_abide.py](https://github.com/neurospin/pypreprocess/blob/master/scripts/abide_preproc.py) is a script which uses this pipeline to preprocess the [ABIDE](http://fcon_1000.projects.nitrc.org/indi/abide/) dataset.

Intra-subject preprocessing in pure Python (with no compiled code, etc.)

A couple of modules for intra-subject preprocessing (slice-timing correction, motion correction, coregistration, etc.) have been implemented in pure Python (using only builtins and numpy/scipy; no compiled code, no wrappers). To demo this feature, simply run the following command:

$ python examples/pure_python/pure_python_preproc_demo.py
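The core idea behind slice-timing correction fits in a few lines: each slice's time series is resampled onto the acquisition times of a reference slice. The sketch below uses plain linear interpolation on a synthetic signal; pypreprocess's own STC module uses a more accurate interpolation scheme, so treat this purely as an illustration:

```python
import numpy as np

TR = 2.0                       # repetition time in seconds
times = np.arange(20) * TR     # acquisition grid of the reference slice

# synthetic BOLD-like signal from a slice acquired 0.5 s after the reference
delay = 0.5
signal = np.sin(2 * np.pi * 0.05 * (times + delay))

# resample the delayed series back onto the reference slice's time grid
corrected = np.interp(times, times + delay, signal)
```

After correction, corrected[t] approximates what the slice would have measured at the reference acquisition times, up to interpolation error and an edge effect at the very first sample.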

Development

You can check the latest version of the code with the command:

$ git clone git://github.com/neurospin/pypreprocess.git

or if you have write privileges:

$ git clone git@github.com:neurospin/pypreprocess.git
Known issues

  • whitespace in the directory name for the variable 'scratch' triggers a bug in nipype and results in a crash (we have not tested whether this also occurs for other path variables);
  • when using an .ini file, say mytest.ini, with ''python pypreprocess.py mytest.ini'', there can be a conflict between pypreprocess.py and the pypreprocess module (solution: rename pypreprocess.py to something like pypreprocini.py);
  • the cache is not relocatable (because joblib encodes absolute paths): if you are forced to move the cache -- e.g. for lack of space on a filesystem -- use a symbolic link to let the system believe that the cache is still at its original location.
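The symbolic-link workaround can be demonstrated end-to-end in a throwaway directory; the path names below are of course placeholders:

```python
import os
import tempfile

root = tempfile.mkdtemp()
orig = os.path.join(root, "old_location", "cache")   # path baked into the cache
new = os.path.join(root, "new_location", "cache")    # where it had to move
os.makedirs(orig)
os.makedirs(os.path.dirname(new))
with open(os.path.join(orig, "item.txt"), "w") as f:
    f.write("cached result")

os.rename(orig, new)    # the cache is moved, e.g. for lack of space
os.symlink(new, orig)   # the old absolute path still resolves
```

Any code that recorded the old absolute path (as joblib does) keeps working, because the symlink transparently redirects it to the new location.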

pypreprocess's People

Contributors

agramfort, agrigis, alexandreabraham, alpinho, banilo, bthirion, chrisgorgo, chrplr, dagra, dohmatob, eickenberg, erramuzpe, gaelvaroquaux, glemaitre, hororohoruru, iglpdc, jbpoline, jeromedockes, kamalakerdadi, kchawla-pi, lesteve, man-shu, mrahim, nicolasgensollen, swaythe, sylvainlan, takhs91, virgilefritsch


pypreprocess's Issues

Weird condition in len(sessions)

In subject_data.py, l.160-165

if self.session_id is None:
    if len(self.func) < 10:
        self.session_id = ["session_%i" % i
                           for i in xrange(len(self.func))]
    else:
        self.session_id = ["session_0"]

If I get it correctly, scans are considered to represent different sessions when there are fewer than 10 of them. However, it can happen that you acquire more than 10 blocks on the same subject.
I suggest:

  1. Quick fix: use a higher threshold, maybe 30?
  2. Real fix: a better way to assess whether images correspond to blocks, e.g., are they 4-dimensional?
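Suggestion (2) could look roughly like this, using array dimensionality in place of a count threshold (infer_session_ids is a hypothetical name, and plain arrays stand in for nifti images):

```python
import numpy as np

def infer_session_ids(func):
    """Hypothetical fix: decide on session structure from image dimensionality
    rather than from how many files there are."""
    # one 4D image per session -> one session id per image
    if all(getattr(img, "ndim", None) == 4 for img in func):
        return ["session_%i" % i for i in range(len(func))]
    # a list of 3D volumes -> a single session, however many scans it has
    return ["session_0"]
```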

[REPORT] ENH: Do not update when page is not visible

With jQuery, it is not difficult to detect whether the page is visible. If it is not, there is no reason to update it. An immediate update is however welcome as soon as the user gets back to the page. I raise this issue because I got browser warnings about a "script slowing down the browser" due to this background activity, which is quite annoying.

IndexError: list index out of range in nipy/nipy/labs/viz_tools/coord_tools.py

This is triggered when preprocessing openfmri. I am intentionally not giving any details on how to reproduce this.

[Memory] Calling check_preprocessing.plot_cv_tc...
plot_cv_tc('/vaporific/edohmato/pypreprocess_runs/openfmri/ds107/sub001/cache_dir/nipype_mem/nipype-interfaces-spm-preprocess-Normalize/0373e12bc04f5e6fde8be91db678bcb3/wrbold.nii',
'task001_run001', 'sub001', '/vaporific/edohmato/pypreprocess_runs/openfmri/ds107/sub001', cv_tc_plot_outfile='/vaporific/edohmato/pypreprocess_runs/openfmri/ds107/sub001/cv_tc_after.png', plot_diff=True)
[Memory] 0.0s, 0.0min: Loading nibabel.funcs.concat_images...
_______________________________________concat_images cache loaded - 0.1s, 0.0min


[Memory] Calling nibabel.loadsave.save...
save(<nibabel.nifti1.Nifti1Image object at 0x526e550>, '/vaporific/edohmato/pypreprocess_runs/openfmri/ds107/sub001/cache_dir/nipype_mem/nipype-interfaces-spm-preprocess-Normalize/0373e12bc04f5e6fde8be91db678bcb3/fourD_func.nii')
_____________________________________________________________save - 0.0s, 0.0min
[Memory] 0.0s, 0.0min: Loading check_preprocessing.compute_cv...
__________________________________________compute_cv cache loaded - 0.0s, 0.0min
Traceback (most recent call last):
File "nipype_preproc_spm_openfmri_ds107.py", line 149, in <module>
results = main(DATA_DIR, OUTPUT_DIR)
File "nipype_preproc_spm_openfmri_ds107.py", line 124, in main
report_filename=report_filename
File "/home/edohmato/CODE/FORKED/pypreprocess/nipype_preproc_spm_utils.py", line 1369, in do_group_preproc
subject_data, **kwargs) for subject_data in subjects)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/joblib/joblib/parallel.py", line 513, in __call__
self.dispatch(function, args, kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/joblib/joblib/parallel.py", line 321, in dispatch
job = ImmediateApply(func, args, kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/joblib/joblib/parallel.py", line 145, in __init__
self.results = func(*args, **kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/nipype_preproc_spm_utils.py", line 1068, in do_subject_preproc
plot_diff=True)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/joblib/joblib/memory.py", line 163, in __call__
return self.call(*args, **kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/joblib/joblib/memory.py", line 316, in call
output = self.func(*args, **kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/check_preprocessing.py", line 173, in plot_cv_tc
title=title,
File "/home/edohmato/CODE/FORKED/pypreprocess/src/nipy/nipy/labs/viz_tools/activation_maps.py", line 220, in plot_map
slicer.plot_map(map, affine, **kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/nipy/nipy/labs/viz_tools/slicers.py", line 343, in plot_map
self._map_show(map, affine, type='imshow', **kwargs)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/nipy/nipy/labs/viz_tools/slicers.py", line 374, in _map_show
get_mask_bounds(not_mask, affine)
File "/home/edohmato/CODE/FORKED/pypreprocess/src/nipy/nipy/labs/viz_tools/coord_tools.py", line 134, in get_mask_bounds
x_slice, y_slice, z_slice = ndimage.find_objects(mask)[0]
IndexError: list index out of range

Memory leaks ?

I am preprocessing ABIDE using pypreprocess and after preprocessing about 50 subjects, I get a MemoryError. If I run the script again, it skips the subjects already preprocessed and is back in business for 50 more subjects, before getting a MemoryError. I'm now keeping an eye on memory consumption and I can see that it is slowly increasing.

Has anybody had the same problem?

realignment breaks when n_sess > 6

Because no more than 6 parameter files are output.
The culprit seems to be line 514. However, changing

self.realignment_parameters_ = np.array(
    self.realignment_parameters_)[..., :6]

to

self.realignment_parameters_ = np.array(
    self.realignment_parameters_)[..., :self.n_sessions]

breaks the unit tests.

UnboundLocalError: local variable 'apply_to_files' referenced before assignment

Traceback (most recent call last):
File "nipype_preproc_spm_auditory.py", line 49, in <module>
last_stage=True
File "/volatile/depot_pypreprocess/pypreprocess/pypreprocess/nipype_preproc_spm_utils.py", line 1327, in do_subjects_preproc
) for subject_data in subjects)
File "/volatile/depot_pypreprocess/pypreprocess/src/joblib/joblib/parallel.py", line 514, in __call__
self.dispatch(function, args, kwargs)
File "/volatile/depot_pypreprocess/pypreprocess/src/joblib/joblib/parallel.py", line 311, in dispatch
job = ImmediateApply(func, args, kwargs)
File "/volatile/depot_pypreprocess/pypreprocess/src/joblib/joblib/parallel.py", line 135, in __init__
self.results = func(*args, **kwargs)
File "/volatile/depot_pypreprocess/pypreprocess/pypreprocess/nipype_preproc_spm_utils.py", line 979, in do_subject_preproc
do_report=do_report
File "/volatile/depot_pypreprocess/pypreprocess/pypreprocess/nipype_preproc_spm_utils.py", line 509, in _do_subject_normalize
write_voxel_sizes = get_vox_dims(apply_to_files)
UnboundLocalError: local variable 'apply_to_files' referenced before assignment

Output all important files to a single directory

Currently it seems to me that output files are left in the cache directory. It makes it hard to organize the work, as the paths can only be accessed via a hash. It would be good to output the interesting files (e.g. those that have been resampled :) to an output directory. To avoid copying, this could be done with a hard link on unices.

unique file name for sessions

whenever the filename is not unique but the directory changes, as in:

session_1_func = BOLD/task001_run00*/bold.nii.gz

the output is then overwritten by the different images, outputting a concatenation and stuff similar to

[attached image: rp_plot_1]

[Doc] Code badly needs documenting

  • Functions, classes, etc. should have comprehensive docstrings
  • The wiki for this project is nearly blank! Do the write-up progressively.

do-report breaks due to an attempt to display a 4D fMRI image

Maybe you should also coregister the mean functional image created by realign and use it for display?
Thanks !


ValueError Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
173 else:
174 filename = fname
--> 175 __builtin__.execfile(filename, *where)

/volatile/thirion/mygit/mathematicians/do_preprocessing.py in ()
45 do_subject_preproc(preproc_dict, concat=False, do_coreg=True,
46 do_stc=False, do_cv_tc=True, do_realign=True,
---> 47 do_report=True)

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/purepython_preproc_utils.pyc in do_subject_preproc(subject_data, verbose, do_caching, do_stc, interleaved, slice_order, do_realign, do_coreg, coreg_func_to_anat, do_cv_tc, fwhm, write_output_images, concat, do_report, parent_results_gallery, shutdown_reloaders)
297 # generate coreg QA thumbs

298             subject_data.generate_coregistration_thumbnails(

--> 299 coreg_func_to_anat=coreg_func_to_anat)
300
301 # garbage collection

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/subject_data.pyc in generate_coregistration_thumbnails(self, coreg_func_to_anat, log, comment)
420 else None,
421 results_gallery=self.results_gallery,
--> 422 comment=comment
423 )
424

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/reporting/preproc_reporter.pyc in generate_coregistration_thumbnails(target, source, output_dir, execution_log_html_filename, results_gallery, comment)
551 output_dir,
552 execution_log_html_filename=execution_log_html_filename,
--> 553 results_gallery=results_gallery,
554 )
555

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/reporting/preproc_reporter.pyc in generate_registration_thumbnails(target, source, procedure_name, output_dir, execution_log_html_filename, results_gallery)
417 title="Outline of %s on %s" % (
418 target[1],
--> 419 source[1],
420 ))
421

/home/bt206016/.local/lib/python2.7/site-packages/joblib-0.6.5-py2.7.egg/joblib/memory.pyc in call(self, _args, *_kwargs)
169 'directory %s'
170 % (name, argument_hash, output_dir))
--> 171 return self.call(_args, *_kwargs)
172 else:
173 try:

/home/bt206016/.local/lib/python2.7/site-packages/joblib-0.6.5-py2.7.egg/joblib/memory.pyc in call(self, _args, *_kwargs)
321 if self._verbose:
322 print self.format_call(_args, *_kwargs)
--> 323 output = self.func(_args, *_kwargs)
324 self._persist_output(output, output_dir)
325 duration = time.time() - start_time

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/reporting/check_preprocessing.pyc in plot_registration(reference_img, coregistered_img, title, cut_coords, slicer, cmap, output_filename)
249 # plot the coregistered image

250     if hasattr(coregistered_img, '__len__'):

--> 251 coregistered_img = load_specific_vol(coregistered_img, 0)[0]
252 # XXX else i'm assuming a nifi object ;)

253     coregistered_data = coregistered_img.get_data()

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/io_utils.pyc in load_specific_vol(vols, t, strict)
91 if isinstance(vols, list):
92 n_scans = len(vols)
---> 93 vol = load_vol(vols[t])
94 elif is_niimg(vols) or isinstance(vols, basestring):
95 _vols = nibabel.load(vols) if isinstance(vols, basestring) else vols

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/io_utils.pyc in load_vol(x)
66 else:
67 raise ValueError(
---> 68 "Each volume must be 3D; got shape %s" % str(vol.shape))
69 elif len(vol.shape) != 3:
70 raise ValueError(

ValueError: Each volume must be 3D; got shape (128, 128, 80, 185)

In [2]: %debug

/home/bt206016/.local/lib/python2.7/site-packages/pypreprocess/io_utils.py(68)load_vol()
67 raise ValueError(
---> 68 "Each volume must be 3D; got shape %s" % str(vol.shape))
69 elif len(vol.shape) != 3:

ImportError: No module named external

Traceback (most recent call last):
File "spm_multimodal_fmri.py", line 22, in <module>
from pypreprocess.nipype_preproc_spm_utils import (do_subject_preproc,
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/nipype_preproc_spm_utils.py", line 18, in <module>
from slice_timing import get_slice_indices
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/slice_timing.py", line 13, in <module>
from .io_utils import (load_specific_vol,
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/io_utils.py", line 18, in <module>
from .external import joblib
ImportError: No module named external

joblib 0.8.0a3 bug: AttributeError: 'NDArrayWrapper' object has no attribute 'shape'

This bug is not present in 0.8.0a2. I'm failing-over to this retro version. I can re-raise the issue in the joblib community.

Traceback (most recent call last):
File "purepython_preproc_demo.py", line 27, in <module>
fwhm=[8] * 3
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/purepython_preproc_utils.py", line 200, in do_subject_preproc
prefix=func_prefix))
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/external/joblib/memory.py", line 170, in __call__
out = self.call(*args, **kwargs)
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/external/joblib/memory.py", line 334, in call
output = self.func(*args, **kwargs)
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/slice_timing.py", line 535, in transform
self.output_data_ = STC.transform(self, raw_data=raw_data)
File "/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/slice_timing.py", line 356, in transform
N = self.kernel_.shape[-1]
AttributeError: 'NDArrayWrapper' object has no attribute 'shape'

Moving/renaming preproc dirs

Would it be possible to move around a preprocessing result directory without breaking anything? At the moment the paths in the HTML reports are all absolute and prevent that. I don't know if there are other absolute paths elsewhere, and if the hardlinks would be an issue.

[Doc] Readme

The pypreprocess project needs a Readme that describes what it is.

REFACTOR things into packages (in the python sense)

Upon @Gael's request:

  • turn reporting, coreutils, external, and algorithms into packages (in the python sense)
  • these packages should be flat, i.e. without a deep directory structure (the opposite of the algorithms directory at the moment)

Reporting broken when running "purepython_preproc_demo.py"

I get the following traceback:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
/home/varoquau/dev/ipython/IPython/utils/py3compat.pyc in execfile(fname, *where)
    202             else:
    203                 filename = fname
--> 204             __builtin__.execfile(filename, *where)

/home/varoquau/dev/pypreprocess/examples/purepython_preproc_demo.py in ()
     29 
     30 # run glm on data
---> 31 execute_spm_auditory_glm(subject_data)

/home/varoquau/dev/pypreprocess/pypreprocess/_spike/pipeline_comparisons.pyc in execute_spm_auditory_glm(data, reg_motion)
    145         frametimes=frametimes,
    146         drift_model=drift_model,
--> 147         hrf_model=hrf_model,
    148         )
    149 

/home/varoquau/dev/pypreprocess/pypreprocess/reporting/glm_reporter.pyc in generate_subject_stats_report(stats_report_filename, contrasts, z_maps, mask, design_matrices, subject_id, anat, anat_affine, slicer, cut_coords, statistical_mapping_trick, threshold, cluster_th, cmap, start_time, title, user_script_name, progress_logger, shutdown_all_reloaders, **glm_kwargs)
    298             title += " for subject %s" % subject_id
    299 
--> 300     level1_html_markup = base_reporter.get_subject_report_stats_html_template(
    301         ).substitute(
    302         title=title,

/home/varoquau/dev/pypreprocess/pypreprocess/reporting/base_reporter.pyc in get_subject_report_stats_html_template(**kwargs)
    638                 ROOT_DIR, 'template_reports',
    639                 'subject_report_stats_template.tmpl.html'),
--> 640                          **kwargs)
    641 
    642 

/home/varoquau/dev/pypreprocess/pypreprocess/reporting/base_reporter.pyc in _get_template(template_file, **kwargs)
    594         fd.close()
    595 
--> 596         return HTMLTemplate(_text).substitute(**kwargs)
    597 
    598 

/home/varoquau/dev/pypreprocess/pypreprocess/tempita/__init__.pyc in substitute(self, *args, **kw)
    171         if self.namespace:
    172             ns.update(self.namespace)
--> 173         result, defs, inherit = self._interpret(ns)
    174         if not inherit:
    175             inherit = self.default_inherit

/home/varoquau/dev/pypreprocess/pypreprocess/tempita/__init__.pyc in _interpret(self, ns)
    182         parts = []
    183         defs = {}
--> 184         self._interpret_codes(self._parsed, ns, out=parts, defs=defs)
    185         if '__inherit__' in defs:
    186             inherit = defs.pop('__inherit__')

/home/varoquau/dev/pypreprocess/pypreprocess/tempita/__init__.pyc in _interpret_codes(self, codes, ns, out, defs)
    210                 out.append(item)
    211             else:
--> 212                 self._interpret_code(item, ns, out, defs)
    213 
    214     def _interpret_code(self, code, ns, out, defs):

/home/varoquau/dev/pypreprocess/pypreprocess/tempita/__init__.pyc in _interpret_code(self, code, ns, out, defs)
    230         elif name == 'expr':
    231             parts = code[2].split('|')
--> 232             base = self._eval(parts[0], ns, pos)
    233             for part in parts[1:]:
    234                 func = self._eval(part, ns, pos)

/home/varoquau/dev/pypreprocess/pypreprocess/tempita/__init__.pyc in _eval(self, code, ns, pos)
    292         try:
    293             try:
--> 294                 value = eval(code, self.default_namespace, ns)
    295             except SyntaxError, e:
    296                 raise SyntaxError(

 in ()

NameError: name 'start_time' is not defined at line 33 column 16

openfmri/ is probably broken

openfmri/ is probably broken. It has not been maintained for a while now, so chances are it is broken by now. I don't have local data at hand to run the scripts.

Reporting mismatch & output names

The report_preproc.html for a specific subject provides a set of thumbnails that correspond to the several pre-processing steps. When I click on the "execution log" link below the normalization of the anat for example, I expect to find anat images in the output section of the log but instead only find the pre-processed timeseries, namely wrdeleteorient_bold.nii, wrdeleteorient_bold_c0000.nii, and wrdeleteorient_bold_c0001.nii.

We need better naming conventions for our output files: having deleteorient everywhere is not very informative. @GaelVaroquaux also suggested including the original input file name in the processed images, to keep a trace of where the files come from.

[NISL] Abide fetching

Some subjects are not fetched by fetch_abide:

  • KKI 50813 is a subject present in the CSV but not in the files
  • UCLA {51232, 51233, 51242-51247, 51270, 51310} because they have no anat
  • OHSU * because they have several functional scans (1 to 3 sessions)

This has been done for the sake of simplicity. OHSU can easily be added (we can take a random session, or merge the 3 sessions). I don't know what to do for UCLA, as it may require adding special cases to the preprocessing.

What do you think?

Multiple tiny issues raised by Philippe with reporting package

** On external/tempita

  • The mercurial repository of tempita is present in your branch (tempita/.hg). It seems desirable to remove it before making a pull request.

** On pypreprocess

  • The general structure of the package is not very clear. It could be worth adding an __init__.py at the root level, or clearly separating examples from pypreprocess code proper (e.g. io_utils.py, which seems generic (at least in part), versus nipy_glm_spm_auditory.py, which is not generic at all).
  • I like the idea behind the ProgressReport class, but the work is only half done. Let me explain: glm_reporter.generate_subject_stats_report() takes as parameters both stats_report_filename AND progress_logger, which already holds a reference to stats_report_filename. For the system to work, the two must be consistent, which is not really clear from the documentation and does not appear obviously in the code (I spent a while on it before understanding what ProgressLogger does). One way to fix this and reduce the number of arguments taken by generate_subject_stats_report is to pass only progress_logger, and make that object usable for writing directly into the file, instead of reopening it. Roughly, replace lines glm_reporter.py:268-270:

with open(stats_report_filename, 'w') as fd:
    fd.write(str(level1_html_markup))
    fd.close()

with something like:

progress_logger.write(str(level1_html_markup))

Ce qui suit, ce sont des choses moins importantes :

  • io_utils.py:193 dans compute_mean_image() : je n'ai pas bien compris l'intérêt de faire la moyenne des affines. On va obtenir n'importe quoi si des affines diffèrent. De plus, tout charger avant d'appeler np.mean() est un peu violent : le facteur limitant étant le chargement des données depuis le disque, on peut charger les fichiers un par un et accumuler les valeurs au fur et à mesure, ce qui évite de nécessiter des quantités extravagantes de mémoire.

  • nipype_preproc_spm_utils.py:89: the SubjectData class could take anat, func and output_dir as constructor parameters, which would avoid having to set them after the fact, as is done in nipy_glm_spm_auditory.py:61 (that seems more "pythonic" to me).
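The suggested constructor might look like this (a sketch only; the real SubjectData lives in nipype_preproc_spm_utils.py and has more attributes):

```python
class SubjectData:
    """Sketch of the suggested signature: key attributes are passed
    at construction time instead of being set afterwards."""

    def __init__(self, anat=None, func=None, output_dir=None):
        self.anat = anat
        self.func = func
        self.output_dir = output_dir

# one expression instead of three assignments after the fact:
subject = SubjectData(anat="anat.nii", func=["run1.nii"],
                      output_dir="/tmp/preproc")
```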

  • reporter/base_reporter.py:167: the test on self.report_filename is useless. There are also two nested "with" statements that could be separated. The same file is opened successively for reading, then for writing.

  • reporter/base_reporter.py:38,47,56: three empty classes. I understand the benefit for readability (it adds information), but at the same time it adds lines for nothing. Writing thumb = tempita.bunch() instead of thumb = Thumbnail() [see reporter/glm_reporter.py:299] is explicit enough, I find. If these classes must be kept, then at least capitalize them (Img instead of img).

  • reporter/base_reporter.py: the series of functions returning templates makes me wince a little. There is a lot of code duplication, and it does not sit well with the casing conventions I know: to me, an all-caps name denotes a constant, not a function. One alternative structure would be a single function taking the template name and returning its text, e.g. get_template("GALLERY_HTML_MARKUP") instead of GALLERY_HTML_MARKUP(); this factors out all the code quite simply. Another would be to define the templates directly, e.g. GALLERY_HTML_MARKUP = get_template("GALLERY_HTML_MARKUP") at the top of the file. I prefer the first solution because it avoids opening a large number of files up front.
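The single-function alternative could be sketched as follows (the directory layout and lowercase file naming are assumptions about how the templates are stored):

```python
import os

def get_template(name, template_dir="template_files"):
    """Load a template lazily by name, replacing one accessor
    function per template: get_template("GALLERY_HTML_MARKUP")
    instead of GALLERY_HTML_MARKUP()."""
    path = os.path.join(template_dir, name.lower() + ".html")
    with open(path) as fd:
        return fd.read()
```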

  • reporter/glm_reporter.py:312. Replace

for j in xrange(len(contrasts)):
    contrast_id = contrasts.keys()[j]
    contrast_val = contrasts[contrast_id]
    z_map = z_maps[contrast_id]

with

for contrast_id, contrast_val in contrasts.iteritems():
    z_map = z_maps[contrast_id]

since j is not used anywhere else (unless "contrasts" is not an actual dict).

  • reporter/glm_reporter.py:82: all the write calls could be replaced with a single one using a multi-line string (""" """), which I think would be a bit clearer. While we're at it, %-interpolation could be replaced with .format(), which gets closer to an HTML templating system (see also line 121, where there are several parameters).
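For illustration, a single write with a multi-line template and .format() might look like this (the field names are hypothetical, not the actual ones in glm_reporter.py):

```python
# one multi-line template instead of many successive write() calls;
# .format() named fields read closer to an HTML templating system than %
report = """\
<h2>{title}</h2>
<p>Subject: {subject_id}</p>
<p>Number of contrasts: {n_contrasts}</p>
""".format(title="GLM report", subject_id="sub001", n_contrasts=3)
```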

nipype.interfaces.spm.Realign (and possibly other such interfaces) is dreadfully slow!!

Indeed, it seems the (matlab batch) pyscript_realign.m script generated by nipype is very poorly configured.

For the same elementary task (single-subject, single-image fMRI motion correction), the SPM8 GUI runs in about 5 seconds, whilst the nipype version takes about 25 minutes!!!

To solve this, we should optimize the generated pyscript_realign.m file.

IndexError

I've updated to the latest pypreprocess and I get an IndexError when trying to process the mixed-gambles dataset. This was working fine with f5a5ec6, so it looks like something changed between the two commits.

Here is the full error log:

/home/fp985994/src/nipy/nipy/labs/glm/glm.py:6: RuntimeWarning: compiletime version 2.6 of module 'nipy.labs.glm.kalman' does not match runtime version 2.7
from . import kalman
/home/fp985994/src/nipy/nipy/labs/glm/glm.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
from . import kalman
/home/fp985994/src/nipy/nipy/labs/glm/glm.py:6: RuntimeWarning: numpy.flatiter size changed, may indicate binary incompatibility
from . import kalman
/home/fp985994/src/nipy/nipy/labs/utils/init.py:3: RuntimeWarning: compiletime version 2.6 of module 'nipy.labs.utils.routines' does not match runtime version 2.7
from .routines import (quantile, median, mahalanobis, svd, permutations,
/home/fp985994/src/nipy/nipy/labs/utils/init.py:3: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
from .routines import (quantile, median, mahalanobis, svd, permutations,
/home/fp985994/src/nipy/nipy/labs/utils/init.py:3: RuntimeWarning: numpy.flatiter size changed, may indicate binary incompatibility
from .routines import (quantile, median, mahalanobis, svd, permutations,
/home/fp985994/src/nipy/nipy/algorithms/statistics/init.py:8: RuntimeWarning: compiletime version 2.6 of module 'nipy.algorithms.statistics.intvol' does not match runtime version 2.7
from . import intvol, rft, onesample, formula
/home/fp985994/src/nipy/nipy/algorithms/statistics/init.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
from . import intvol, rft, onesample, formula
/home/fp985994/src/nipy/nipy/algorithms/statistics/init.py:8: RuntimeWarning: numpy.flatiter size changed, may indicate binary incompatibility
from . import intvol, rft, onesample, formula
/home/fp985994/src/nipy/nipy/algorithms/statistics/init.py:9: RuntimeWarning: compiletime version 2.6 of module 'nipy.algorithms.statistics._quantile' does not match runtime version 2.7
from ._quantile import _quantile as quantile, _median as median
/home/fp985994/src/nipy/nipy/algorithms/statistics/init.py:9: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
from ._quantile import _quantile as quantile, _median as median
/home/fp985994/src/nipy/nipy/algorithms/statistics/init.py:9: RuntimeWarning: numpy.flatiter size changed, may indicate binary incompatibility
from ._quantile import _quantile as quantile, _median as median
/home/fp985994/src/nipy/nipy/algorithms/registration/resample.py:11: RuntimeWarning: compiletime version 2.6 of module 'nipy.algorithms.registration._registration' does not match runtime version 2.7
from ._registration import (_cspline_transform,
/home/fp985994/src/nipy/nipy/algorithms/registration/resample.py:11: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
from ._registration import (_cspline_transform,
/home/fp985994/src/nipy/nipy/algorithms/registration/resample.py:11: RuntimeWarning: numpy.flatiter size changed, may indicate binary incompatibility
from ._registration import (_cspline_transform,
Traceback (most recent call last):
File "/home/fp985994/dev/pypreprocess/pypreprocess.py", line 21, in
do_subjects_preproc(sys.argv[1])
File "/home/fp985994/dev/pypreprocess/pypreprocess/nipype_preproc_spm_utils.py", line 1689, in do_subjects_preproc
) for subject_data in subjects)
File "/home/fp985994/envs/p27/local/lib/python2.7/site-packages/joblib/parallel.py", line 651, in call
self.retrieve()
File "/home/fp985994/envs/p27/local/lib/python2.7/site-packages/joblib/parallel.py", line 534, in retrieve
raise exception_type(report)
joblib.my_exceptions.JoblibIndexError: JoblibIndexError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
/home/fp985994/dev/pypreprocess/pypreprocess.py in ()
16 print ("Example:\r\npython %s scripts/HCP_tfMRI_MOTOR_preproc"
17 ".ini\r\n") % sys.argv[0]
18 sys.exit(1)
19
20 # consume config file and run pypreprocess back-end
---> 21 do_subjects_preproc(sys.argv[1])
22
23
24
25

...........................................................................
/home/fp985994/dev/pypreprocess/pypreprocess/nipype_preproc_spm_utils.py in do_subjects_preproc(subject_factory=[{'anat': '/volatile/fabian/data/ds005/sub001/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub002/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub003/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub004/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub005/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub006/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub007/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub008/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub009/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub010/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub011/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub012/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub013/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub014/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub015/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub016/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}], output_dir='/volatile/fabian/data/ds005/pypreprocess_output', hardlink_output=True, n_jobs=3, caching=True, dartel=True, func_write_voxel_sizes=[3, 3, 3], anat_write_voxel_sizes=None, report=True, dataset_id='/volatile/fabian/data/ds005', 
dataset_description="\n<p>The NYU CSC TestRetest resource includes EPI...tp://www.nitrc.org/projects/nyu_trt\n/'>here</a>.\n", prepreproc_undergone='', shutdown_reloaders=True, dataset_dir=None, spm_dir=None, matlab_exec=None, **preproc_params={'TA': 'TR * (1 - 1 / nslices)', 'TR': 2.0, 'anat_write_voxel_sizes': [1, 1, 1], 'caching': True, 'coreg_anat_to_func': False, 'coregister': True, 'coregister_reslice': False, 'coregister_software': 'spm', 'cv_tc': False, 'dartel': True, ...})
   1684     if n_jobs > 1:
   1685         preproc_subject_data = Parallel(n_jobs=n_jobs)(
   1686             delayed(do_subject_preproc)(
   1687                 subject_data,
   1688                 **preproc_params
-> 1689                 ) for subject_data in subjects)
    subject_data = {'anat': '/volatile/fabian/data/ds005/sub016/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}
    subjects = [{'anat': '/volatile/fabian/data/ds005/sub001/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub002/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub003/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub004/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub005/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub006/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub007/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub008/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub009/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub010/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub011/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub012/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub013/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub014/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub015/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}, {'anat': '/volatile/fabian/data/ds005/sub016/ana...sk001_run003/bold.nii.gz'], 'nipype_results': {}}]
   1690 
   1691     else:
   1692         preproc_subject_data = [do_subject_preproc(subject_data,
   1693                                                    **preproc_params)

...........................................................................
/home/fp985994/envs/p27/local/lib/python2.7/site-packages/joblib/parallel.py in __call__(self=Parallel(n_jobs=3), iterable=<generator object <genexpr>>)
    646             if pre_dispatch == "all" or n_jobs == 1:
    647                 # The iterable was consumed all at once by the above for loop.
    648                 # No need to wait for async callbacks to trigger to
    649                 # consumption.
    650                 self._iterating = False
--> 651             self.retrieve()
    self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=3)>
    652             # Make sure that we get a last message telling us we are done
    653             elapsed_time = time.time() - self._start_time
    654             self._print('Done %3i out of %3i | elapsed: %s finished',
    655                         (len(self._output),

    ---------------------------------------------------------------------------
    Sub-process traceback:
    ---------------------------------------------------------------------------
    IndexError                                         Fri Jan 24 10:40:26 2014
PID: 8431                  Python 2.7.3: /home/fp985994/envs/p27/bin/python
...........................................................................
/home/fp985994/dev/pypreprocess/pypreprocess/nipype_preproc_spm_utils.pyc in do_subject_preproc(subject_data={'report_preproc_filename': '/volatile/fabian/da...reprocess_output/sub001/reports/report_log.html'}, deleteorient=False, slice_timing=True, slice_order='ascending', interleaved=False, refslice=1, TR=2.0, TA='TR * (1 - 1 / nslices)', slice_timing_software='spm', realign=True, realign_reslice=False, register_to_mean=True, realign_software='spm', coregister=True, coregister_reslice=False, coreg_anat_to_func=False, coregister_software='spm', segment=False, normalize=False, dartel=True, fwhm=[0.0, 0.0, 0.0], func_write_voxel_sizes=[3, 3, 3], anat_write_voxel_sizes=[1, 1, 1], hardlink_output=True, report=True, cv_tc=False, parent_results_gallery=None, last_stage=False, preproc_undergone=None, prepreproc_undergone='', generate_preproc_undergone=False, caching=True)
   1160         subject_data = _do_subject_slice_timing(
   1161             subject_data, TR, refslice=refslice,
   1162             TA=TA, slice_order=slice_order, interleaved=interleaved,
   1163             report=report,  # post-stc reporting bugs like hell!
   1164             software=slice_timing_software,
-> 1165             hardlink_output=hardlink_output
   1166             )
   1167 
   1168         # handle failed node
   1169         if subject_data.failed:

...........................................................................
/home/fp985994/dev/pypreprocess/pypreprocess/nipype_preproc_spm_utils.pyc in _do_subject_slice_timing(subject_data={'report_preproc_filename': '/volatile/fabian/da...reprocess_output/sub001/reports/report_log.html'}, TR=2.0, TA=1.9411764705882353, refslice=1, slice_order=array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 1... 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]), interleaved=False, caching=True, report=True, indexing='matlab', software='spm', hardlink_output=True)
    180         generate_stc_thumbnails(
    181             subject_data.func,
    182             stc_func,
    183             subject_data.reports_output_dir,
    184             sessions=subject_data.session_id,
--> 185             results_gallery=subject_data.results_gallery
    186             )
    187 
    188     subject_data.func = stc_func
    189 

...........................................................................
/home/fp985994/dev/pypreprocess/pypreprocess/reporting/preproc_reporter.pyc in generate_stc_thumbnails(original_bold=[array([[[[0, 0, 0, ..., 0, 0, 0],
     [0, 0...         [0, 0, 0, ..., 0, 0, 0]]]], dtype=int16), array([[[[ 0,  0,  0, ...,  0,  0,  0],
    ...   [12, 13, 10, ...,  7,  5,  7]]]], dtype=int16), array([[[[ 0,  0,  0, ...,  0,  0,  0],
    ...   [ 4,  0, 10, ..., 14,  6,  8]]]], dtype=int16)], st_corrected_bold=[array([[[[0, 0, 0, ..., 0, 0, 0],
     [0, 0...         [0, 0, 0, ..., 0, 0, 0]]]], dtype=int16), array([[[[ 0,  0,  0, ...,  0,  0,  0],
    ...   [12, 12, 13, ..., 22,  6,  5]]]], dtype=int16), array([[[[ 0,  0,  0, ...,  0,  0,  0],
    ...   [ 4,  4,  0, ..., 11, 14,  6]]]], dtype=int16)], output_dir='/volatile/fabian/data/ds005/pypreprocess_output/sub001/reports', voxel=array([ 32,  17, 120]), sessions=['1', '2', '3'], execution_log_html_filename=None, results_gallery=<pypreprocess.reporting.base_reporter.ResultsGallery object>, progress_logger=None)
    934 
    935         output_filename = os.path.join(output_dir,
    936                                        'stc_plot_%s.png' % session_id)
    937 
    938         pl.figure()
--> 939         pl.plot(o_bold[voxel[0], voxel[1], voxel[2], ...], 'o-')
    940         pl.hold('on')
    941         pl.plot(stc_bold[voxel[0], voxel[1], voxel[2], ...], 's-')
    942         pl.legend(('original BOLD', 'ST corrected BOLD'))
    943         pl.title("session %s: STC QA for voxel (%s, %s, %s)" % (

IndexError: invalid index
___________________________________________________________________________
