
hcppipelines's Introduction

HCP Pipelines

The HCP Pipelines product is a set of tools (primarily, but not exclusively, shell scripts) for processing MRI images for the Human Connectome Project. Among other things, these tools implement the Minimal Preprocessing Pipeline (MPP) described in Glasser et al. 2013.

For further information, please see the project's "Release Notes, Installation, and Usage" document and FAQ.

Discussion of HCP Pipeline usage and improvements can be posted to the hcp-users discussion list. Sign up for the hcp-users Google Group and click "Sign In". For instructions on joining without a Google account, see hcp-users-join-wiki.

hcppipelines's People

Contributors

axkralj990, brice82, cjdonahue, coalsont, cyang31, demsarjure, dicemt, gcburgess, glasserm, hfxcarl, hodgem, jese11, joowon-kim, jschindl, junilc, kjamison, markj789, mharms, michielcottaar, nagedem, nrgadmin, sivcek, ssothro, takuya-hayashi, tbbrown, ythackercs


hcppipelines's Issues

Freesurfer v6 (dev version) optimization of recon-all for HCP data

Dear Timothy and other HCP experts,

I am currently setting up and testing the HCP pipelines with a recent FreeSurfer dev version on structural data acquired according to the HCP protocol parameters. I have been testing your FreeSurferPipeline.sh from the https://github.com/Washington-University/Pipelines/tree/freesurfer_v6 branch,
which calls the recon-all.v6.hires script
https://github.com/freesurfer/freesurfer/blob/d26114a201333f812d2cef67a338e2685c004d00/scripts/recon-all.v6.hires
in 10 subjects against my optimized recon-all code, and would like to share the following observations:

  1. You resample the data to 1 mm³ for tessellation. The reason, as stated in the script, is: "If done in hires space, then there are lots of defects, and the topofixer sometimes cuts off a gyrus". However, my experience with my data is the opposite. When I compared tessellation at 1 mm³ and in hires space, I saw many more errors with data tessellated at 1 mm³. With tess1mm, in most subjects I found a spot where the pial/white surface did not extend sufficiently outwards to cover all white/gray matter; in some subjects a large portion of a gyrus was cut off. A gyrus cut-off in hires space occurred in one spot in 2 of 10 subjects, but in those cases the error was also present in the 1 mm³ tessellation.
    As an example, see several screenshots where the 1 mm³ tessellation cuts out a gyrus whereas the hires tessellation is fine.
    hires tessellation: pial - yellow, white - red
    1 mm tessellation: pial - green, white - light blue

[Screenshots: tess1mm_issue_subj1 through tess1mm_issue_subj4]

So, according to my testing, hires tessellation is better. Could you please comment on this?

  2. I presume that you use the v6 version of FreeSurfer. There was a substantial update of the FLAIRpial/T2pial code in the dev version: the -nsigma_above and -nsigma_below parameters are considered obsolete, and there are new parameters for T2/FLAIR pial placement (-T2_min_outside, -T2_max_outside, -T2_min_inside, -T2_max_inside, -wm_weight, and several others).
    See the following commits:
    freesurfer/freesurfer@1fe83cd
    freesurfer/freesurfer@102e053
    freesurfer/freesurfer@b4b47c8

Do you plan to update the recon-all code to reflect these updates?

  3. I think that mris_inflate with hires data needs an increased number of iterations (as also suggested here: https://surfer.nmr.mgh.harvard.edu/fswiki/SubmillimeterRecon ). I use -n 50.

  4. You disabled mri_normalize for T2 (using if (0)), stating that it does not work well. Do you mean the errors with artificially high intensities in T2 after normalization? There were some bugfixes to mri_normalize in the dev version addressing that
    (mainly the following commit: freesurfer/freesurfer@adb9cfd ),
    which may solve your issues.

There are another 3 points I encountered while processing non-hires data that may also be relevant here:

  1. You use cubic interpolation for FLAIR/T2. In some other data I saw that using cubic interpolation for T1 as well leads to less-smoothed T1 data and a slight local improvement in gray/white contrast. Cubic interpolation was the default in the patched version of 5.3; however, in v6.0, for reasons not clear to me, cubic is switched off again. With cubic interpolation off, I had issues with local errors in white surface placement (the white surface extended too far out, with artificially low thickness estimates), and these errors did not respond to editing of wm.mgz, so they were not correctable (apart from directly editing surface coordinates, which is tedious and practically unusable). When I switched to cubic interpolation of T1, surface placement responded better to wm.mgz editing. The downside is that the gray/white interface is somewhat noisier, which leads to a higher number of small errors that have to be corrected by more extensive wm.mgz editing.
    We partially discussed this with Bruce and Doug in the following thread:
    https://www.mail-archive.com/[email protected]/msg52822.html
    Did you do any testing with this option on HCP data? Do you have any experience with it?

  2. In some subjects, where the cortex of the left and right hemispheres touches at the midline, the left and right pial surfaces overlapped. This overlap can be prevented by using aseg.presurf.mgz in mris_make_surfaces for T2pial, but when the aseg is incorrect at the midline (i.e., it incorrectly assigns voxels of the left cortex to the right cortex, which is often the case), this leads to an artificial cut-out of a portion of the gray matter. I solved this by running recon-all in 2 iterations: first running standard recon-all (which uses aseg.presurf.mgz in T2pial) and then rerunning recon-all -autorecon2-noaseg -autorecon3 -T2pial, this time using aseg.mgz instead (which is generated from the current surfaces near the end of recon-all). This setup solved my issue satisfactorily. What do you think of this? Did you encounter a similar issue with overlapping pial surfaces when testing recon-all on HCP data?

  3. In some cases the skullstrip is over-aggressive and cuts out small portions of gray matter. I found that the resulting local issues with the pial surface can be elegantly solved as follows: instead of using a T2 masked by brainmask.mgz (or brain.finalsurfs.mgz, as is recommended in the recent dev version), I dilate the mask for the T2 using mri_binarize --dilate 2 to slightly extend the T2 image. T2pial can then correct the small local pial errors caused by the local brainmask issue. (A sketch of this is below.)
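
For point 3, a minimal sketch of the dilation trick (file names are hypothetical; my actual script differs):

# Dilate the brain mask by 2 voxels, then re-mask the T2 with the dilated mask,
# so T2pial can recover gray matter that the original skullstrip cut off.
mri_binarize --i brainmask.mgz --min 1 --dilate 2 --o brainmask.dil2.mgz
mri_mask T2.prenorm.mgz brainmask.dil2.mgz T2.masked.mgz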

I am still in the process of optimizing things, so any comments on my proposed code modifications, as well as your own views on the topic / open issues with recon-all -hires on HCP data, will be very much appreciated.

Regards,

Antonin

Error: Spin echo fieldmap has different dimensions than scout image, this requires a manual fix

Hi,
Still a newbie on the use of the HCP pipelines, so please excuse any trivial question/comment.
I'm currently attempting to run the fMRIVolume pipeline on an fMRI dataset using spin echo field maps (AP - PA). This pipeline worked fine the very first time I ran it, but now it stops when calling TopupPreprocessingAll.sh, with the following error:

Error: Spin echo fieldmap has different dimensions than scout image, this requires a manual fix.

A more complete version of the report is:

Wed Apr 10 15:44:04 EDT 2019 - MotionCorrection.sh: Change names of all matrices in OutputMotionMatrixFolder
Wed Apr 10 15:44:07 EDT 2019 - MotionCorrection.sh: Run the Derive function to generate appropriate regressors from the par file
Wed Apr 10 15:44:30 EDT 2019 - MotionCorrection.sh: END
Wed Apr 10 15:44:30 EDT 2019 - GenericfMRIVolumeProcessingPipeline.sh: EPI Distortion Correction and EPI to T1w Registration
Wed Apr 10 15:44:30 EDT 2019 - GenericfMRIVolumeProcessingPipeline.sh: mkdir -p /Users/nib19005/Desktop/2019/TMS/BIDS/TMS_BIDS3/Bids/01/MotorLoc_AP/DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased
Wed Apr 10 15:44:32 EDT 2019 - DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh: START
Wed Apr 10 15:44:32 EDT 2019 - TopupPreprocessingAll.sh: START: Topup Field Map Generation and Gradient Unwarping
Wed Apr 10 15:44:32 EDT 2019 - TopupPreprocessingAll.sh: Error: Spin echo fieldmap has different dimensions than scout image, this requires a manual fix

I've looked for similar issues but couldn't find any. I have previously run the PreFreeSurfer, FreeSurfer, and PostFreeSurfer pipelines on the structural images.

I've been thinking about changing the field map size myself, but I still wonder why things worked in the past and not anymore. I'm willing to share whatever bit of code/data might be helpful to figure out the issue, but I'd prefer to wait until you tell me exactly what you need.

Thanks!
Nicolas

gradient coefficient file

The email address provided for obtaining the gradient coefficient file for gradient distortion correction is no longer valid. Can you provide an alternative?

advice on where to incorporate freesurfer manual fixes

I am going through subjects' FreeSurfer results and doing manual fixes, like adding control points, since it does a poor job in the temporal pole area.

I added a specific controlPoints processing script in place of "FreeSurfer/FreeSurferPipeline.sh" that picks up with "-autorecon2-cp" and runs all the same steps afterwards.

recon-all -subjid $SubjectID -sd $SubjectDIR -autorecon2-cp -nosmooth2 -noinflate2 -nocurvstats -nosegstats -openmp ${num_cores} ${seed_cmd_appendix}

Any suggestions on where else I will need to make changes so these fixes take effect for the rest of the FreeSurfer/PostFreeSurfer pipeline?

I can see that my control.dat was picked up in the normal processing stream, but it didn't seem to have much effect on anything after that.

This is after running from -autorecon2-cp onward:
[Screenshot]

Error: Forbidden when trying to get example data

When I try to download the example data using either of the links in the "Getting Example Data" section of "v3.4.0 Release Notes, Installation, and Usage" in the README, I get a message that says, "Error: Forbidden." I am registered with db.humanconnectome.org (though I do not have and do not intend to get the gradient coefficients file).

IcaFixProcessingBatch.sh typo?

For consistency, line 126 of Pipelines/Examples/Scripts/IcaFixProcessingBatch.sh should probably be either:

FixScript=${HCPPIPEDIR}/ICAFIX/hcp_fix
or
FixScript=${FSL_FIXDIR}/hcp_fix

negative unwarpdir inconsistencies: y- vs. -y

Hi,

I'm running into a problem when running PreFreeSurferPipeline.sh when the unwarp direction is negative along the second axis (y- / -y):

T2ToT1wDistortionCorrectAndReg.sh calls FSL's convertwarp (and others), which require the negative direction to be specified as --unwarpdir=y-.
But earlier, TopupPreprocessingAll.sh is called, and its documentation indicates I should pass unwarpdir=-y.
I've specified unwarpdir=-y, and I'm running into an error at line 270 of T2ToT1wDistortionCorrectAndReg.sh. I've noticed that most scripts accept both y- and -y. Apparently -y is not accepted by the FSL commands, but I wonder whether y- would work everywhere (also in the other pipelines)... A normalization sketch is below.
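
For example, a minimal normalization sketch (untested across all pipelines) that converts the -y form to the y- form before the FSL calls:

# Normalize the sign convention: the FSL tools want the minus sign as a suffix.
case "$UnwarpDir" in
    -x) UnwarpDir="x-" ;;
    -y) UnwarpDir="y-" ;;
    -z) UnwarpDir="z-" ;;
esac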

This is the exact error:

Wed Nov 30 05:19:12 UTC 2016 - TopupPreprocessingAll.sh - END: Topup Field Map Generation and Gradient Unwarping

Cannot interpret shift direction = -y
terminate called after throwing an instance of 'NEWMAT::IndexException'
/opt/HCP-Pipelines/PreFreeSurfer/scripts/T2wToT1wDistortionCorrectAndReg.sh: line 252: 24237 Aborted                 ${FSLDIR}/bin/convertwarp --relout --rel --ref=${WD}/Magnitude --shiftmap=${WD}/FieldMap_ShiftMap${TXw}.nii.gz --shiftdir=${UnwarpDir} --out=${WD}/FieldMap_Warp${TXw}.nii.gz

Thanks,

Joke

eddy non-gpu option

There currently isn't a way to run the diffusion pipeline without a GPU. DiffusionPreprocessing/scripts/run_eddy.sh has the capability to run eddy_openmp instead of eddy_cuda, but DiffusionPreprocessing/DiffPreprocPipeline_Eddy.sh is currently hard-coded to pass run_eddy.sh the "-g" flag. Here's a patch that adds an optional "--eddy-non-gpu" argument to DiffPreprocPipeline.sh and passes it along to DiffPreprocPipeline_Eddy.sh, which then skips the "-g" argument when calling run_eddy.sh.

I haven't thoroughly vetted it, so this is more of a suggestion/consideration.

eddy_non_gpu.patch.zip

Variable naming ambiguity in TopupPreprocessingAll.sh

if [ ! -z ${DistortionCorrectionFieldOutput} ] ; then
${FSLDIR}/bin/imcp ${WD}/TopupField ${DistortionCorrectionFieldOutput}
fi

The Pre-FreeSurfer processing pipeline sets DistortionCorrectionFieldOutput=${StudyFolder}/${Subject}/T2w/T2wToT1wDistortionCorrectAndReg/FieldMap when TopupPreprocessingAll.sh runs. That's meant to be a file name. However, the script's working directory is set to the same value.

Commit 23fe15f reveals the issue: the above imcp command fails because it tries to copy T2wToT1wDistortionCorrectAndReg/FieldMap/TopupField.nii.gz to itself (rather than to T2wToT1wDistortionCorrectAndReg/FieldMap.nii.gz).

The arguably "correct" fix requires a few changes, but simply reverting the second parameter back to ${DistortionCorrectionFieldOutput}.nii.gz is enough to get the pipeline running to completion again.
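
For reference, a sketch of that quick fix applied to the snippet quoted above:

# Append .nii.gz so imcp writes T2wToT1wDistortionCorrectAndReg/FieldMap.nii.gz
# instead of copying the file onto itself inside the working directory.
if [ ! -z ${DistortionCorrectionFieldOutput} ] ; then
  ${FSLDIR}/bin/imcp ${WD}/TopupField ${DistortionCorrectionFieldOutput}.nii.gz
fi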

typo in TaskfMRILevel2.v2.0.sh

Line 21:
LevelOnefsfNames=`echo $LevelOnefMRINames | sed 's/@/ /g'`
should be replaced with:
LevelOnefsfNames=`echo $LevelOnefsfNames | sed 's/@/ /g'`

smoothing for grayordinates

Hello,
I notice that the grayordinates include both surface vertices and subcortical voxels.
I now have some results on grayordinates, and I want to spatially smooth the values using Gaussian kernels.
Do you have any code or example for doing such smoothing on surface vertices? (One possible approach is sketched below.)
Thanks.
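
One possible approach (a sketch with hypothetical file names) is Connectome Workbench's wb_command -cifti-smoothing, which smooths surface vertices along the surface and subcortical voxels in the volume:

# Kernels are Gaussian sigmas in mm (add -fwhm to give FWHM instead);
# a 2.55 mm sigma is roughly a 6 mm FWHM. COLUMN smooths each map of the file.
wb_command -cifti-smoothing results.dtseries.nii 2.55 2.55 COLUMN results_smoothed.dtseries.nii \
    -left-surface Subject.L.midthickness.32k_fs_LR.surf.gii \
    -right-surface Subject.R.midthickness.32k_fs_LR.surf.gii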

Changed behavior of applywarp, flirt, and bet in FSL 6.0+

FSL 6.0+ apparently expects the -r (reference) volume to be a single 3D volume in any call to applywarp.
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind1811&L=FSL&D=0&P=74472
To me, this change in behavior should be considered a bug, but based on the list-serve response above, FSL doesn't appear to consider it one. So we need to review all our applywarp calls across the entire code base and ensure that the -r volume is a single 3D volume in all cases. (An interim workaround is sketched below.)
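
Where a 4D reference sneaks in, one could extract a single 3D volume first; a sketch with hypothetical variable names:

# Use only the first volume of the 4D image as the applywarp reference.
${FSLDIR}/bin/fslroi ${Reference4D} ${Reference3D} 0 1
${FSLDIR}/bin/applywarp --rel --interp=spline --in=${Input} --ref=${Reference3D} --warp=${Warp} --out=${Output}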

PostFixBatch.sh

Two argument names for PostFix.sh are wrong.
[Line 106]
wrong: --fmri-names
right: --fmri-name
[Line 107]
wrong: --highpass
right: --high-pass

Also, I think that "NO" should be the default input for ReUseHighPass (line 83), since ${fMRIName}_Atlas_hp${HighPass}.dtseries.nii does not seem to be created after running IcaFixBatch.sh.

FreeSurferNHP.sh

Hi! My name is Pam. I'm working with MRI of rhesus monkeys and I want to do cortical thickness analysis. The script DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh calls FreeSurferNHP.sh, which would really help me with my analysis. Does somebody know how I can get or download it? I really appreciate your help!

Thanks!!

Fix and move creation of VA maps

Fix creation of VA maps

  • In MSMAll.sh, the code that creates the midthickness vertex area (VA) map for both hemispheres misnames the resulting VA map file with an R in the name, indicating that it is for the right hemisphere when it is actually for both hemispheres.
  • This code currently starts at line 608 and runs through line 615.
  • Once this is fixed, other uses of the incorrect name for the midthickness VA file need to be updated in the MSMAll.sh script.

Move creation of VA maps to PostFreeSurfer phase of Structural Preprocessing

  • After this is fixed in the MSMAll.sh script, then this code needs to also be included at the end of the FreeSurfer2CaretConvertAndRegisterNonLinear.sh script so that these maps are generated (for each LowResMesh specified) during the PostFreeSurfer phase of Structural Preprocessing.

Create VA maps in MSMAll.sh conditionally

  • Once the VA map creation is copied/ported into the FreeSurfer2CaretConvertAndRegisterNonLinear.sh script, then it needs to be made a conditional creation in the MSMAll.sh script so that the VA map file is only generated if it does not already exist.
  • That way, if the MSMAll.sh script is run on data that has been processed by the changed version of the FreeSurfer2CaretConvertAndRegisterNonLinear.sh script, it will not re-generate the VA map. But if MSMAll.sh is run on data that was processed by a previous version of that script, it will generate the necessary VA map. (A sketch of the guard follows below.)
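
A sketch of the proposed guard (paths, variable names, and the temporary-metric handling are hypothetical):

# Only create the midthickness VA map if it does not already exist.
VAMap="${DownSampleFolder}/${Subject}.midthickness_va.${LowResMesh}k_fs_LR.dscalar.nii"
if [ ! -f "$VAMap" ] ; then
    wb_command -surface-vertex-areas "$LeftMidthickness" "$TmpDir/L.midthickness_va.func.gii"
    wb_command -surface-vertex-areas "$RightMidthickness" "$TmpDir/R.midthickness_va.func.gii"
    wb_command -cifti-create-dense-scalar "$VAMap" \
        -left-metric "$TmpDir/L.midthickness_va.func.gii" \
        -right-metric "$TmpDir/R.midthickness_va.func.gii"
fi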

Update "Release Notes, Installation, and Usage"

It would be really handy if we had a more generic (and updated) "Release Notes" page. The current one, crafted around release 3.4, causes confusion about which versions of software need to be used (regarding FSL and Workbench). (And the orientation around the S500 release is outdated).

Test Issue

This is a test issue assigned to tbbrown to check on email notification of issues.

Prefreesurfer "unwarpdir" parameter

The unwarpdir parameter is described as the "readout direction" of the anatomical data (T1w, T2w) in the PreFreeSurfer documentation. However, this parameter is ultimately passed to FSL's fugue, which describes the unwarp direction as the phase-encoding direction (for EPI data). Is there a difference between the acquisitions that would merit the perpendicular choice of unwarpdir, or is this possibly a mistake in the documentation?

TaskfMRILevel2.v?.0.sh - Contrasts with whitespaces in titles

I had problems with NumContrasts being set incorrectly in TaskfMRILevel2.v?.0.sh. The problem was that I had spaces in my contrast names. The easy solution is to replace "wc -w" with "wc -l" in the calculation in those scripts (a quick illustration is below).
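
A quick self-contained illustration of why "wc -w" miscounts:

printf '%s\n' "faces vs shapes" "rest" > contrasts.txt
wc -w < contrasts.txt   # prints 4: words, inflated by the spaces in the first name
wc -l < contrasts.txt   # prints 2: lines, one per contrast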

Thanks -

Richard Watts, University of Vermont

PostFix.sh

I find this portion of the PostFix.sh code totally unintuitive:

#if [ -e ${ComponentList} ] ; then
#	log_Msg "Removing ComponentList: ${ComponentList}"
#	rm ${ComponentList}
#fi
  1. FIX does not even generate this file, so as far as I can tell I have to generate it myself
  2. Of course I do not want to edit PostFix.sh directly, so I generate the ComponentList, Signal, and Noise text files in a separate PostFix control script
  3. Unfortunately there is no way around the rm ${ComponentList}

What am I missing here?

gradient nonlinear correction for non-Siemens scanners?

Currently the GradientDistortionUnwarp.sh script has "siemens" hardcoded as the "vendor" input to gradient_unwarp.py. It's straightforward enough to add an optional input to this script and propagate that field up to the other scripts, but has anyone tested whether this is all that's needed for GE scanners? (A sketch of the change is below.)
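
A sketch of what the change might look like (the variable plumbing and exact call are hypothetical; the idea is just to make the hardcoded vendor overridable):

# Default to the current behavior, but allow an optional vendor override;
# -g passes the gradient coefficient file to gradient_unwarp.py.
Vendor="${Vendor:-siemens}"
gradient_unwarp.py "$InFile" "$OutFile" "$Vendor" -g "$GradientCoeffFile"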

Add 'zstat' as part of task-fMRI file names

@glasserm @gcburgess What do you think about us adding 'zstat' explicitly as part of the summary (merged) file names that result from the task pipeline, so that we no longer have this confusing historical oddity where the 'beta' maps are indicated as such via the file name, but the 'zstat' maps are not. Should be a relatively simple code addition, but would require a willingness to adapt downstream scripts (outside of the HCP Pipelines) that may be looking for the "zstat" outputs.

Eddy Preprocessing

Hi,

Can someone confirm that the eddy preprocessing step will generate the following combined B0 volume? It's hard to know for sure just by following the code.

B0 . . B0 . . . . B0  # a positive encoded dwi
b0 . . .  # a negative encoded dwi
=>
B0 B0 b0 # the combined b0's 

Thanks,
Ryan

Regarding PreFreeSurferPipeline

Hi...

I am new to this field. I am trying to run PreFreeSurferPipeline.sh but encountered the following error:

Images: ./mystudy/102109/T1w/T1w1_gdc ./mystudy/102109/T1w/T1w2_gdc Output: ./mystudy/102109/T1w/T1w

ERROR: Could not find image ./mystudy/102109/T1w/AverageT1wImages/T1w1_gdc.nii.gz

I found that the specified file (T1w1_gdc.nii.gz) is generated under the T1w folder, not under AverageT1wImages.

Kindly help me in this regard.

Thanks and Regards,

Sayan

Pos=AP? or PA?

In DiffusionPreprocessingBatch.7T.sh,
PosData="xxxx_AP.nii.gz"
NegData="xxx_PA.nii.gz"
whereas in DiffPreprocPipeline_PreEddy.sh
basePos="PA"
baseNeg="AP"
I assume this does not make any difference for the result, but I want to make sure.
Also, the content of the files created in the Diffusion/rawdata directory does not match that of the files in the unprocessed/Diffusion directory.

Couldn't find T1w_hires.nii.gz with freesurfer 6

Hi,

I'm running the HCP pipelines on a dataset with FreeSurfer 6, and I'm running into an issue at this step:

Mon Mar  5 06:21:10 UTC 2018 - FreeSurferPipeline.sh: cmd: fslmaths /oak/stanford/groups/russpold/data/Psychosis/0.0.3//derivatives/HCPPipelines/sub-S9905QEN/T1w/sub-S9905QEN/mri/T1w_hires.nii.gz -mul /oak/stanford/groups/russpold/data/Psychosis/0.0.3//derivatives/HCPPipelines/sub-S9905QEN/T1w/sub-S9905QEN/mri/T2w_hires.nii.gz -sqrt /oak/stanford/groups/russpold/data/Psychosis/0.0.3//derivatives/HCPPipelines/sub-S9905QEN/T1w/sub-S9905QEN/mri/T1wMulT2w_hires.nii.gz

I'm running the freesurfer_v6 branch.

FreeSurfer ran successfully, using the most recent recon-all.v6.hires.

I've attached my freesurfer logs, as well as the pipeline logs.
I tried to debug it myself, but I can't find where in FreeSurfer the T?w_hires.nii.gz files should be created.

Any thoughts?
HCP pipelines log
recon-all.cmd
recon-all-status.log
recon-all.log

hcp_fix doesn't apply any hp filter to CIFTI if do_motion_regression=FALSE

Restarted from #107.

Background: In the v4.0.0 HCPpipelines version of ICAFIX/hcp_fix, we added the capability to control whether the motion parameters were regressed out of the data as part of the fix-cleaning (via the do_motion_regression argument). [Prior to that (e.g., v3.27.0 and earlier), regression of the motion parameters was always applied].

FIX was originally written for volume (NIFTI) data, with the assumption that any high-pass filtering was applied prior to FIX itself. Thus, FIX originally only needed to worry about applying the same hp filter to the motion regressors, and thus in the fix script, the -h argument (to specify the filter) was made a subargument to the -m flag (that instructs fix to apply motion regression).

When the CIFTI format came along, the hp filtering of the CIFTI dtseries data was implemented in the context of fix_3_clean (called by the fix script to implement the actual cleaning of the data). But the -h argument in fix stayed a subargument to the -m flag.

Problem: While fix_3_clean has the capability to independently apply motion regression and an hp filter to the CIFTI dtseries, the fix script itself does not support that -- unless one requests motion regression, there is no mechanism in fix itself for filtering the dtseries.

Consequently, if running the v4.0.0 version of hcp_fix with something like the following
hcp_fix <data> 2000 FALSE
the volume (NIFTI) data will be filtered by the requested hp=2000 filter (since that is done in hcp_fix script itself), but the CIFTI dtseries data will not get filtered at all (due to the setting of FALSE for the do_motion_regression argument). Insidiously in this situation, a cleaned CIFTI file will still be generated, and the file name will in fact be labeled hp2000_clean.dtseries.nii, even though an hp=2000 filter wasn't actually applied to it!!

Suggested solution: Split -m and -h into independent arguments in the fix script. (Ideally, in the FIX distribution itself).

usage statement

There is currently no usage function in MakeAverageDataset.sh, so line 107 needs to be commented out for now.

Issue about new FreeSurferPipeline

Dear experts,

Environment: HCP Pipeline v4.0.0, FreeSurfer v6.0.0, Ubuntu 14.04

I recently used the FreeSurfer pipeline to preprocess my data, and the following error came out:
FreeSurferPipeline.sh: ABORTING: unrecognized option: --printcom=

But when I switched to FreeSurfer 5.3.0, it worked without this error.

What should I do to fix this issue?

PostFix/README.md is confusing

PostFix/README.md should be amended to more clearly distinguish between running with the already compiled (and provided) version of prepareICAs.m (compiled under R2016b) vs. re-compiling under a different version of Matlab.

Orientation to purpose of the different 'sub'-pipelines

It would be helpful to users if we included a general orientation to the purpose of all the different 'sub'-pipelines (i.e., sub-directories in Pipelines directory), either as part of the top-level README, or as part of the FAQ (or updated Release Notes). e.g., something similar to https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-10.Whatistheorderofpipelinesforresting-statedata?
but with more detail and guidance.

Renamed issue: Subject 204218 resting state detrending failed

Please use issue #108 for the original topic of hcp_fix without motion regression, and fix argument parsing.


Mac OSX cp --preserve=timestamps error

In FreeSurfer/Pipelines/scripts/FreeSurferHiresPial and FreeSurferHiresWhite:
cp --preserve=timestamps $SubjectDIR/$SubjectID/surf/lh.pial ...

This causes an error on Mac OS X El Capitan (and, I would strongly suspect, earlier versions). It looks like this was changed in March and broke Mac compatibility. The --preserve option is only available with GNU coreutils. I'm not sure what the problem was with the previous -p option.
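
A portable alternative might be plain -p, which preserves mode, ownership, and timestamps on both BSD (macOS) and GNU cp; a sketch of the same line:

# -p is POSIX and works on macOS; whether it was avoided for a reason is my question.
cp -p $SubjectDIR/$SubjectID/surf/lh.pial ...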

This caused a lot of frustration and head scratching!

Thanks -

Richard

MagnitudeInputName="NONE" causes Error

PreFreeSurferPipelineBatch.sh contains the following comment, just before MagnitudeInputName is defined:

# The MagnitudeInputName variable should be set to a 4D magnitude volume
# with two 3D timepoints, or "NONE" if not used

However, when MagnitudeInputName is set to "NONE", the script raises the following error:

Cannot open volume NONE for reading!

The error occurs on this line of SiemensFieldMapPreprocessingAll.sh:

${FSLDIR}/bin/fslmaths ${MagnitudeInputName} -Tmean ${WD}/Magnitude

My question is: if I don't want to use MagnitudeInputName, which lines of SiemensFieldMapPreprocessingAll.sh should I exclude (in addition to the one above)? Are there lines in other scripts called by PreFreeSurferPipelineBatch.sh that I should also exclude? (A guard sketch is below.)
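
For what it's worth, a minimal guard sketch (assuming that skipping the magnitude-based steps entirely is valid, which is part of my question):

# Only compute the mean magnitude image when a magnitude volume was supplied.
if [ "${MagnitudeInputName}" != "NONE" ] ; then
    ${FSLDIR}/bin/fslmaths ${MagnitudeInputName} -Tmean ${WD}/Magnitude
fi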

Audit of "intermediate files" prior to HCP-D/A processing

Prior to running the Pipelines on the HCP-D/A data, it would be good if we could do an "audit" of the full set of pipeline outputs, with a goal of moving away from the "packaging" of files that we want to "keep". That is, as we move to the data living on the cloud, we should plan on simply saving all the files in given, specified output directories. If there are some files in those directories (e.g., the entire MNINonLinear folder) that we really don't want included from certain pipelines, they should probably be deleted as part of the Pipeline script itself.

Relatedly, since we haven't completely dismissed the possibility of saving at least some "intermediates" in the cloud, it would be beneficial to review the intermediates with an eye toward what could be eliminated to reduce storage needs considerably, while keeping any intermediates that might be particularly hard to regenerate, or which might be particularly useful for debugging purposes.

fMRIVolume OneStepResampling.sh is breaking the TR if input data is not in seconds

When data whose TR is in milliseconds is passed to OneStepResampling.sh, the resulting files are broken because the time units are not accounted for.

The script saves the TR and then re-sets it later with fslmerge, but fslmerge -tr expects the TR in seconds.

We found that after this step our files had a TR of 1000.00 seconds instead of 1 s (1000.00 ms).

I added some code to account for different units and convert the TR before it reaches the fslmerge step:

##Save TR for later
TR_vol=`${FSLDIR}/bin/fslval ${InputfMRI} pixdim4 | cut -d " " -f 1`
TR_units=`${FSLDIR}/bin/fslval ${InputfMRI} time_units | cut -d " " -f 1`
NumFrames=`${FSLDIR}/bin/fslval ${InputfMRI} dim4`

##Added to account for time units: convert the TR to seconds
if [ ${TR_units} = "s" ] ; then
        :   # already in seconds; nothing to do ("continue" is invalid outside a loop)
elif [ ${TR_units} = "ms" ] ; then
        TR_vol=$( perl -e "print $TR_vol / 1000.0" )
elif [ ${TR_units} = "us" ] ; then
        TR_vol=$( perl -e "print $TR_vol / 1000000.0" )
fi
##Merge together results and restore the TR (saved beforehand)
${FSLDIR}/bin/fslmerge -tr ${OutputfMRI} $FrameMergeSTRING $TR_vol
${FSLDIR}/bin/fslmerge -tr ${OutputfMRI}_mask $FrameMergeSTRINGII $TR_vol
