
bidskit's People

Contributors

alexenge, alexsayal, argyelan, celstark, danweflen, dmd, jcrdubois, jmtyszka, johncholewa, lixiangxu, nair-r, pn2200, rhancockn, vnckppl


bidskit's Issues

bidskit 1.2.3 not completing 1st pass file conversions

When I run bidskit 1.2.3, it creates the directories but does not complete the DICOM-to-NIfTI conversion. There are no warnings or errors. It creates the work, derivatives, and sourcedata folders, but they are all empty. The code folder contains a Protocol_Translator.json, but the file is empty. Any ideas why this might be the case?

Terminal output:


BIDSKIT 1.2.3

dcm2niix version v1.0.20190720 detected
Initializing BIDS dataset directory tree in /Volumes/My_Passport/test
Creating required file templates
Creating new dataset_description.json
Creating new participants.json

Source data directory : /Volumes/My_Passport/test/sourcedata
Working Directory : /Volumes/My_Passport/test/work
Use Session Directories : Yes
Overwrite Existing Files : No


Pass 1 : DICOM to Nifti conversion and translator creation


New protocol dictionary created : /Volumes/My_Passport/test/code/Protocol_Translator.json
Remember to replace "EXCLUDE" values in dictionary with an appropriate image description
For example "MP-RAGE T1w 3D structural" or "MB-EPI BOLD resting-state

Subject directories to prune:

IntendedFor field with multiple targets

Hi Mike,

I've found a bug; I've done some sleuthing and can point in the general direction of the problem, but I can't say exactly what the cause is.

The symptom: When the IntendedFor field in the JSON sidecar to a field map has multiple values (i.e., multiple target functional runs), it always puts ses-01 as the prefix to the functional runs -- even when it actually belongs to a later session (e.g., ses-02, ses-03, etc.) This bug does not happen when there is only one target value in the IntendedFor field.

What I've figured out:

I think the issue starts on line 317 of the code, or earlier. On line 317 we have:
bids_purpose, bids_suffix, bids_intendedfor = prot_dict[ser_desc]

When there is only one IntendedFor target value, the bids_intendedfor value returned there will look correct, e.g., task-taskname_bold.

When there are multiple IntendedFor target values, the bids_intendedfor value returned on line 317 is already "built out", e.g., I get ['ses-1/func/sub-M80305800_ses-1_task-sponpain_bold.nii.gz', 'ses-1/func/sub-M80305800_ses-1_task-acute_bold.nii.gz']. This is a problem because those ses-1 values appear to be hardcoded; they don't accurately reflect the session number. And because these values already end in .nii.gz, they are never sent to bids_build_intendedfor on lines 345/346.

Thanks!
Yoni

run numbering when several SerDesc have the same BIDS_Name in Protocol_Translator

I have the same BIDS_Name for 2+ SerDesc entries (SE-EPI fieldmaps) in my Protocol_Translator (due to a poor naming scheme at the console, granted!).

"SE_EPI_Fieldmap_Pos":[
    "fmap",
    "dir-AP_epi",
    "UNASSIGNED"
],
"acq-casinoE_run-01_dir-AP_epi":[
    "fmap",
    "dir-AP_epi",
    "UNASSIGNED"
],

The run numbering fails in this situation: bidskit simply acknowledges that the files already exist and skips them, with this message:

Populating BIDS source directory
Preserving previous sub-p60cs_ses-caltech_dir-AP_run-01_epi.nii.gz
Preserving previous sub-p60cs_ses-caltech_dir-AP_run-01_epi.json

It should be possible to modify auto_run_no (in translate.py) so it recognizes this situation. I can take a crack at it.

Problem adding new participant

I've got data in raw_data/s2/Day1/*IMA and I run ./dcm2bids.py -i ../raw_data_SS -o ../BIDS_SS, edit my ProtocolTemplate.json, run it again, and everything works just fine.

I now copy into raw_data/s1/Day1/*IMA and run again, but it's not doing what I expect. I hit Pass 2 only (expected) and for the new participant, it just says "Processing session Day1" and "Preserving conversion directory". It goes on to organize the one it already did.

I've replicated this several times now. I let it process s1 and s2 at once, organize all just fine, and then added in s3. It skips over s3 with just the "Preserving conversion directory" and re-organizes s1 and s2.

Am I missing something in how to add in new subjects?

Add support for single-session data

Currently, the code mandates that there be a session directory in the input - subject/session/raw_images - and writes out a directory structure and set of filenames that use the ses-<session_label> format. That's great if you have multiple sessions, but if you only have one session, it adds complexity and implies that >1 session may exist. BIDS makes this an optional component rather than a mandatory one, so we should set things up with that in mind.

One solution is to force placement of the DICOMs in the input folder into a "Session1"-type directory, but to determine whether there are >1 session folders; if not, place the BIDS output without the session info. The trouble is that across multiple runs we might run bidskit first with "Session1" in place and later with "Session2" in place, and we'd have lost the session info in the process.

A second solution is to say "sorry, you need to include session info". I'm not keen on that, as noted (unless I'm reading the BIDS spec wrong!).

A third solution is to take the current "loop over session directories" structure and rework it to flag single- vs. multi-session data and handle each case accordingly.

Thoughts?

bvec and bval files

Hi, when I run the script on diffusion data, the bval and bvec files are not automatically generated, but if I run dcm2niix alone, it creates the files perfectly.

Fieldmap IntendedFor redux

IntendedFor still needs two things, from what I can see:

  1. Spitting out the full directory
  2. Handling multiple runs smoothly

For example, my Protocol_Translator.json has a pair of sections like this, which I think are written as intended:
"BOLD_PACal":[ "fmap", "dir-PA_epi", "task-baselines_bold" ],
This leads to:
"IntendedFor":"sub-121001_ses-Session1_task-baselines_bold.nii.gz"

In truth, given the multiple runs we had with the same name, it should read:
"IntendedFor":["ses-Session1/func/sub-121001_ses-Session1_task-baselines_run-01_bold.nii.gz", "ses-Session1/func/sub-121001_ses-Session1_task-baselines_run-02_bold.nii.gz", "ses-Session1/func/sub-121001_ses-Session1_task-baselines_run-03_bold.nii.gz", "ses-Session1/func/sub-121001_ses-Session1_task-baselines_run-04_bold.nii.gz" ]
So, we should both prepend the ses-<session>/func/ bit and append whatever run numbers we came up with.
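A combined sketch of both fixes (prepend the session directory, enumerate the runs); the helper name and arguments here are invented for illustration, not bidskit's API:

```python
import os

def build_intendedfor_list(ses_label, file_stub, run_numbers):
    """Build IntendedFor targets of the form 'ses-X/func/<stub>_run-NN_bold.nii.gz'.

    ses_label:   session directory name, e.g. 'ses-Session1'
                 (pass '' for single-session data)
    file_stub:   filename prefix, e.g. 'sub-121001_ses-Session1_task-baselines'
    run_numbers: run indices to reference, e.g. range(1, 5)
    """
    targets = []
    for run in run_numbers:
        rel = os.path.join("func", f"{file_stub}_run-{run:02d}_bold.nii.gz")
        if ses_label:
            rel = os.path.join(ses_label, rel)  # session prefix per BIDS spec
        targets.append(rel)
    # BIDS accepts a bare string for one target, a list for several
    return targets[0] if len(targets) == 1 else targets
```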

UnicodeDecodeError: ascii codec can't decode

I have been running bidskit with the Docker image and need to add new subjects. I keep getting the following error. Any advice is appreciated.

Laurens-MBP:encode lbreithauptlangston$ docker run -it -v /Users/lbreithauptlangston/Desktop/encode/:/mnt rnair07/bidskit --indir=/mnt/dicom --outdir=/mnt/source


DICOM to BIDS Converter

Software Version : 1.1.1
DICOM Root Directory : /mnt/dicom
BIDS Source Directory : /mnt/source
BIDS Derivatives Directory : /mnt/derivatives/conversion
Working Directory : /mnt/work/conversion
Use Session Directories : Yes
Overwrite Existing Files : No
Traceback (most recent call last):
  File "/app/dcm2bids.py", line 944, in <module>
    main()
  File "/app/dcm2bids.py", line 139, in main
    prot_dict = bids_load_prot_dict(prot_dict_json)
  File "/app/dcm2bids.py", line 728, in bids_load_prot_dict
    prot_dict = json.load(json_fd)
  File "/usr/lib/python3.4/json/__init__.py", line 265, in load
    return loads(fp.read(),
  File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 6: ordinal not in range(128)

Move conv folders to a work folder

Keeping the temporary conversion folders out of the final BIDS source tree avoids validation errors. The ultimate goal is to handle conversion dependencies the same way make does, to support adding new subjects and sessions to the DICOM directory.

KeyError

After upgrading to the latest Bidskit, I believe I messed something up. I've attempted reinstallation via several different methods but keep getting an error when running it, similar to the following:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/bin/bidskit", line 11, in <module>
    load_entry_point('bidskit==1.2.3', 'console_scripts', 'bidskit')()
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/bidskit/launcher.py", line 207, in main
    args.clean_conv_dir, overwrite)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/bidskit/organize.py", line 68, in organize_series
    run_no = btr.auto_run_no(nii_list, prot_dict)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/bidskit/translate.py", line 360, in auto_run_no
    _, bids_suffix, _ = prot_dict[info['SerDesc']]
KeyError: '3-PLANE_LOC'

It doesn't seem to recognize the naming convention of the NIFTI files. If I delete those dicoms, it throws an error for a different file. Any thoughts/help would be greatly appreciated! Thank you.

all protocols being excluded

I just started trying to use bidskit and am running into some problems. The first pass runs fine, with all of the data properly converted. I modified my Protocol_Translator.json and tried to run the second pass, but all of the protocols are excluded. Here's part of my Protocol_Translator.json file:
{
    "localizer":[
        "EXCLUDE_BIDS_Directory",
        "EXCLUDE_BIDS_Name"
    ],
    "t1_mpr_ns_sag_p2_iso":[
        "anat",
        "T1w"
    ],
    "t1_mpr_tra_iso":[
        "anat",
        "T1w"
    ],
    "ep2d_bold_moco":[
        "func",
        "task-rest_bold"
    ],
    "fMRI_Task_Learn_4":[
        "func",
        "task-learn_run-4_bold"

And this is what I'm getting when I run the second pass:
psyc311:bids_test orruser$ python ~/bidskit/dcm2bids.py -i ~/bids_test/DICOM/ -o ~/bids_test/BIDS


Pass 2 : Organizing Nifti data into BIDS directories

Processing subject CBLM_LEARN_101_101-CBLM
Processing session 101_scan

  • Excluding protocol ep2d_bold_moco
  • Excluding protocol fMRI_Task_Control_1
  • Excluding protocol fMRI_Task_Control_2
  • Excluding protocol fMRI_Task_Learn_1
  • Excluding protocol fMRI_Task_Learn_2
  • Excluding protocol fMRI_Task_Learn_3
  • Excluding protocol fMRI_Task_Learn_4
  • Excluding protocol gre_field_mapping
  • Excluding protocol gre_field_mapping
  • Excluding protocol gre_field_mapping
  • Excluding protocol localizer
  • Excluding protocol localizer
  • Excluding protocol localizer
  • Excluding protocol t1_mpr_ns_sag_p2_iso
    Preserving conversion directory

The protocol names appear to match the json file, so I'm not sure why they're being excluded. Please let me know if I can provide any other information.

Thanks!

handling of multi-echo MPRAGE

It would be good to export the different echoes -- in a manner similar to the one currently used for GE field maps.

Better handling of multiple scans with same description (e.g., multiple fMRI runs)

Right now, if multiple scans have the same DICOM series description, only one of them ends up in the BIDS directory. You can use the --use-run flag as a work-around, but then everything gets an appended number, and the number is rather arbitrary. Ideally, it should detect multiple scans with the same name and, for those, append run-01, run-02, and so on.

One solution is in the pull request I put up in #13. I've given it a quick test on some data here, and there's one tweak I may still make to it. But there could be other fine solutions too. Either way, the current behavior of producing only one of the files is not ideal.
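The detection-plus-numbering idea can be sketched like this (an illustration of the approach, not the actual #13 implementation; `assign_run_numbers` is a hypothetical helper):

```python
from collections import Counter, defaultdict

def assign_run_numbers(series_names):
    """Append run-01, run-02, ... only to names that occur more than once.

    Unique series descriptions pass through unchanged, so single-run
    acquisitions keep their plain names.
    """
    totals = Counter(series_names)       # how often each name appears overall
    seen = defaultdict(int)              # running count per duplicated name
    out = []
    for name in series_names:
        if totals[name] > 1:
            seen[name] += 1
            out.append(f"{name}_run-{seen[name]:02d}")
        else:
            out.append(name)
    return out
```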

run order not preserved

Hi Mike,

Run order is not preserved in the latest version. See here:

[screenshot of the converted file listing]

The far right column shows how many volumes are in the image, so we know which file is which. Run 13 from the first pass becomes Run 2, while Run 15 becomes Run 1. It flips.

Separately, the latest version also adds run-01 to everything, even when there is only one run. This could arguably be a bug or a feature :-) A previous version, however, did not add a run number unless there were multiple runs. I don't have much of a preference; just wanted to let you know.

Thank you!
Yoni

run order not preserved

Hi Mike,

We finished data collection and are using bidskit again. I am still seeing that run order is not always preserved by bidskit, as in #32. For example:

In the dicom dir, I have:

joas2631@blogin01:$ ls acute_pa_32ch_mb8_v01_r01_0023/|wc
    109     109    2616  
joas2631@blogin01:$ ls acute_pa_32ch_mb8_v01_r01_0025/|wc
    816     816   19584

So: 109 volumes for series 23, and 816 volumes for series 25 (there was a problem with the run and we had to restart it part-way through) -- and the order switches!

But in the bids directory created by bidskit, I see:

joas2631@blogin01:$ lsi sub-M80371745/ses-1/func/*nii
sub-M80371745/ses-1/func/sub-M80371745_ses-1_task-acute_run-01_bold.nii        82   82   56  816
sub-M80371745/ses-1/func/sub-M80371745_ses-1_task-acute_run-02_bold.nii        82   82   56  109

So the 109 volume run becomes run-02, while the 816 volume run becomes run-01.

I am using the latest version (1.1.2).

If it would be helpful for me to provide the data, I'm glad to - just let me know.

thanks!
Yoni Ashar

Empty output directory and .json file

For our project we would like to share our MRI data, and therefore we are putting the data into BIDS format.
We tried to use the dcm2bids.py command in the terminal on our computer (Ubuntu with Python 3).
However, the output is not what we were expecting: the command makes an output folder and a .json file, but both are empty.
Could you help us find out why the folder and file are empty?

Kelly Berckmans - WP
Dept of Psychology and data analysis
Henri Dunantlaan 1
BE-9000 Ghent, Belgium
[email protected]

Add parallelization

Some of the individual conversion steps are trivially parallelizable. Maybe create a custom nipype class to wrap the dcm2bids functions; then nipype can handle job submission to our favorite grid engine.
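Pending a nipype wrapper, a lighter sketch is possible: dcm2niix runs as an external process, so even a plain thread pool overlaps conversions (illustrative only; `convert_series` and `convert_all` are hypothetical names, and only dcm2niix's standard `-o` output-directory flag is used):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def convert_series(series_dir, out_dir, cmd="dcm2niix"):
    """Convert one DICOM series directory; returns the process exit code.

    The converter runs as an external process, so Python threads are
    enough to keep several conversions in flight despite the GIL.
    """
    result = subprocess.run(
        [cmd, "-o", str(out_dir), str(series_dir)],
        capture_output=True, text=True,
    )
    return result.returncode

def convert_all(series_dirs, out_dir, workers=4, cmd="dcm2niix"):
    """Convert every series directory using a pool of worker threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda d: convert_series(d, out_dir, cmd),
                             series_dirs))
```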

events.tsv not properly created

Hi,

I have used dcm2bids.py to convert DICOM files from ABCD raw fMRI data into NIfTI files in BIDS format. Image (nii.gz) and JSON files were successfully created, but I have a problem with the events.tsv file for each subject and run. Running dcm2bids.py has no problem creating the event file, but it doesn't write any values after creating the columns (onset, duration, trial_type, and response_time). As I believe no issue has been reported on this matter, I was wondering if I was supposed to fill in those event time values manually. If not, could you advise me on how to retrieve the needed information from the DICOM event-related text files and fix the corresponding events.tsv files? I am using ABCD raw task fMRI data and have already converted all of it into NIfTI files in BIDS, which passed the BIDS validator (meaning the events.tsv files are present but empty except for the column names). Thank you!

Error with first pass

Hi, I'm running the following command:

docker run -it -v /Users/bryanjackson/Project/:/STERN jmtyszka/bidskit -d /STERN

and receive the following error:

node: bad option: -d

My folder is structured like this:

Project/sourcedata/subjs/DICOMS

I've rebuilt the container already and am still receiving the error.

empty output with docker only

I've been running dcm2bids.py with a local installation without any trouble, but today I tried the Docker image and my input files are not being detected, so my output is empty.

Locally I run: ~/bidskit/dcm2bids.py -i /Users/josephorr/Google\ Drive\ File\ Stream/My\ Drive/DiffusionPilot/DICOM -o /Users/josephorr/Google\ Drive\ File\ Stream/My\ Drive/DiffusionPilot/BIDS

With docker I run: docker run -it -v /Users/josephorr/Google\ Drive\ File\ Stream/My\ Drive/DiffusionPilot/DICOM/:/mnt rnair07/bidskit --indir=/mnt/dicom --outdir=/mnt/source

This is my first attempt at using docker, so I'm not sure if I'm running it correctly. Thanks for any insights.

DICOMDIR as input

Hello,

I have a dataset from a Philips scanner which is in DICOMDIR format. Is this at all compatible with bidskit, or do you know of a conversion that can make it work?

I tried placing the DICOMDIR in sourcedata, but no series are identified in Protocol_Translator.json.

Thank you in advance,

IntendedFor should include ses-XX prefix

Hi,

This is related to the issue I previously opened, but much simpler. It also concerns the values in the IntendedFor field in the JSON sidecar for the fmaps.

BIDS spec requires that these values include ses-XX if the dataset has sessions. E.g., the following is correct for datasets with multiple sessions:
"IntendedFor":"ses-1/func/sub-M80302134_ses-1_task-bladder_bold.nii.gz"

bidskit currently does not prepend the ses-XX.

A simple fix that works in my testing is editing the function bids_build_intendedfor to have:

    # iff the bids prefix contains 'ses', prepend ses-X to the IntendedFor string                                                                                                            
    if splt[1][0:3] == 'ses':
        ifstr = os.path.join(splt[1], "func", bids_prefix + bids_suffix + ".nii.gz")
    else:
        ifstr = os.path.join("func", bids_prefix + bids_suffix + ".nii.gz")

    return ifstr

I'm not submitting this as a pull request just yet because it may be more complicated in light of the previous issue I opened (#36), but I wanted to bring it to your attention now, as it relates to that issue.

thank you!!

IntendedFor field in json does not have 'run-0X' in filename

Hi Mike,

Since I know you are working on all this stuff now, I wanted to add one more thing to the mix. In the JSON files accompanying the field maps, the IntendedFor field points to the functional run names -- but without the run-0X "suffix".

In other words: the current version of bidskit appends run-0X to all functional runs (which is fine and BIDS-compliant), but the functional filenames in the IntendedFor fields do not include run-0X, and thus point to a non-existent file.

Thank you thank you!
Yoni

clarification on sourcedata requirements

I'm confused by what sourcedata requirements there are. I have:

sourcedata
└── 027
    ├── fieldmap_BOLD_1
    ├── fTRT_1
    ├── Localizers_1
    ├── MPRage_1
    ├── PresentationStates_1
    ├── Survey_1
    ├── unnamed_1
    └── ZShimRef_1

(each of those contains a bunch of dicom images).

Those are the names the scanner gives those directories. Do I have to rename them in order to use bidskit?

------------------------------------------------------------
Processing subject 027
------------------------------------------------------------
* Looking at Localizers_1
* Subject/session names cannot contain "-" or "_"
* Please rename the subject/session folder in the sourcedata directory and rerun bidskit

naming issues with sbref

I have single-band reference images (one per multiband functional run) that I'm trying to convert, but I'm running into naming trouble. I've set up Protocol_Translator as follows:

"ritl_pa_32ch_mb8_v01_r01_SBRef":[
    "func",
    "task-ritl_acq-SB_run-01_sbref",
    "task-ritl_acq-MB_run-01_bold"
],
"ritl_pa_32ch_mb8_v01_r01":[
    "func",
    "task-ritl_acq-MB_run-01_bold",
    "UNASSIGNED"
],

But they are being named as if the sbref are duplicates of the bold images:
sub-M80301092_task-ritl_acq-MB_run-01_run-01_bold.nii.gz
sub-M80301092_task-ritl_acq-MB_run-02_run-01_bold.nii.gz
sub-M80301092_task-ritl_acq-MB_run-03_run-01_bold.nii.gz
sub-M80301092_task-ritl_acq-MB_run-04_run-01_bold.nii.gz
sub-M80301092_task-ritl_acq-MB_run-05_run-01_bold.nii.gz
sub-M80301092_task-ritl_acq-MB_run-06_run-01_bold.nii.gz
sub-M80301092_task-ritl_acq-SB_run-01_run-02_sbref.nii.gz
sub-M80301092_task-ritl_acq-SB_run-02_run-02_sbref.nii.gz
sub-M80301092_task-ritl_acq-SB_run-03_run-02_sbref.nii.gz
sub-M80301092_task-ritl_acq-SB_run-04_run-02_sbref.nii.gz
sub-M80301092_task-ritl_acq-SB_run-05_run-02_sbref.nii.gz
sub-M80301092_task-ritl_acq-SB_run-06_run-02_sbref.nii.gz

I've tried without the IntendedFor field and I get the same result. Is this a bug, or am I defining them incorrectly?

Thanks,
Joe

Multiple passes overwrite existing files

Now that we can add new subjects and re-run bidskit with the goal of processing just the new subjects, I've discovered that doing so overwrites existing subjects' data. It's not such a huge deal for the .nii files, as they should be recreated identically, but tweaks to the .json and .tsv files certainly shouldn't be altered without asking. (I'm currently restoring a whole bunch of *events.tsv and .json files from my backup...)

I would suggest checking whether the subject/session output directory exists and, if so, skipping that subject/session entirely unless an --overwrite or --reprocess flag is set. Alternatively, it could do the check on each output it's trying to create. Either way, overwriting our events.tsv files is... sub-optimal.

Automatic update of protocol_translator when adding new sourcedata?

Is your feature request related to a problem? Please describe.

BIDSKIT works great and I am very happy with this incredibly helpful tool! I work with heterogeneous clinical datasets from multiple scanners; each dcm2niix conversion produces unique titles for its protocols. I've found that once I run bidskit (1st and 2nd pass) successfully, newly added sourcedata is not "seen" or updated in the Protocol_Translator file. If I delete the Protocol_Translator file and re-run bidskit, I have to redo the whole protocol translation again.

Describe the solution you'd like

Is there any way to make the Protocol_Translator a dynamic file, so that I can keep re-running bidskit after adding new sourcedata? Thank you!

ImportError: No module named 'bids'

Describe the bug
When I run docker run -it -v /dataset/:/dataset jmtyszka/bidskit bidskit -d /dataset, there is an ImportError:

Traceback (most recent call last):
  File "/usr/local/bin/bidskit", line 11, in <module>
    load_entry_point('bidskit==2019.8.16', 'console_scripts', 'bidskit')()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 561, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2631, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2291, in load
    return self.resolve()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2297, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python3.5/dist-packages/bidskit-2019.8.16-py3.5.egg/bidskit/__main__.py", line 42, in <module>
  File "/usr/local/lib/python3.5/dist-packages/bidskit-2019.8.16-py3.5.egg/bidskit/translate.py", line 36, in <module>
ImportError: No module named 'bids'

More than one sequence with the same name

In my protocol I have two repetitions of a spin-echo fieldmap sequence pair, which have the same sequence names, "SpinEchoFieldMap_AP" and "SpinEchoFieldMap_PA". As a consequence, bidskit only identifies one sequence pair in Protocol_Translator.json.

"SpinEchoFieldMap_AP":[
    "EXCLUDE_BIDS_Directory",
    "EXCLUDE_BIDS_Name",
    "UNASSIGNED"
],
"SpinEchoFieldMap_PA":[
    "EXCLUDE_BIDS_Directory",
    "EXCLUDE_BIDS_Name",
    "UNASSIGNED"
],

How can I handle this situation? I would like both pairs to be identified and included in the BIDS directory, with different run- fields.

Thank you in advance,

source & derivatives naming scheme

I wonder if it would make more sense to change the naming scheme of the various folders, so that the final folder structure of the dataset is more BIDS-like. According to the BIDS specification, the DICOMs should eventually go in a folder called "sourcedata", and the BIDS-compliant subject/session folders should sit at the top level of the dataset directory. I would also suggest moving Protocol_Translator.json from derivatives/conversion to the "code" folder. Here is what this would look like:

myDataset

(INPUT FOR BIDSKIT dcm2bids.py)

  • sourcedata
    • sub-myFirstSub
      • ses-myFirstSes
        • firstDicomSeries
          ....
        • lastDicomSeries
          ...
          ...
    • sub-myLastSub
      • ses-myFirstSes
        ...

(OUTPUTS OF BIDSKIT dcm2bids.py)

  • derivatives
    (empty)
  • working
    (outputs of dcm2niix, temporary)
  • code
    • Protocol_Translator.json
  • sub-myFirstSub
    • ses-myFirstSes
      ...
      ...
  • sub-myLastSub
    • ses-myFirstSes
      ...
  • participants.tsv
  • dataset_description.json

problem with dual gradient echo fieldmap

Hi Mike,

Thanks for this toolbox!
We are having an issue with fieldmaps (Siemens dual gradient echo). We have four functional runs and one fieldmap for each. Everything seems to work for one subject; here are the files in the work directory for one of the runs:

work/conversion/sub-pilot02/ses-first/

de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_FIELD_MAP--GR--12_e2.json
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_FIELD_MAP--GR--12_e2.nii.gz
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_FIELD_MAP--GR--12.json
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_FIELD_MAP--GR--12.nii.gz
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_FIELD_MAP--GR--13.json
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_FIELD_MAP--GR--13.nii.gz
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_RUN4--EP--14.json
de44ea9386c2e21e2a39718bc121a94d2a0ee73a--OBIWAN_RUN4--EP--14.nii.gz

And the files in the BIDS output directory for two runs:

BIDS/sub-pilot02/ses-first/fmap/

sub-pilot02_ses-first_run-01_acq-task_phasediff.json
sub-pilot02_ses-first_run-01_acq-task_phasediff.nii.gz
sub-pilot02_ses-first_run-02_acq-task_magnitude.nii.gz
sub-pilot02_ses-first_run-04_acq-task_phasediff.json
sub-pilot02_ses-first_run-04_acq-task_phasediff.nii.gz
sub-pilot02_ses-first_run-05_acq-task_magnitude.nii.gz

From what we can tell, the numbering is odd because a run number is being consumed by the echo-2 magnitude image, which is then discarded.

However for another subject with the exact same DICOM file structure, we are encountering a problem. Here is the same run in the work directory after the first pass:

work/conversion/sub-pilot03/ses-first/

2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_FIELD_MAP--GR--12_e2.json
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_FIELD_MAP--GR--12_e2.nii.gz
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_FIELD_MAP--GR--12.json
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_FIELD_MAP--GR--12.nii.gz
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_FIELD_MAP--GR--13_e2.json
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_FIELD_MAP--GR--13_e2.nii.gz
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_RUN4--EP--14.json
2da3451ba5c5b36e641f3f0f82ae778dde5bfb95--OBIWAN_RUN4--EP--14.nii.gz

The series for the phasediff files is being named differently ('13_e2' instead of '13'), and on the second pass we get the following error:

Organizing OBIWAN_FIELD_MAP
    Identifying fieldmap image type
    GRE detected
    Identifying magnitude and phase images
Traceback (most recent call last):
  File "dcm2bids.py", line 944, in <module>
    main()
  File "dcm2bids.py", line 233, in main
    bids_run_conversion(work_conv_dir, first_pass, prot_dict, bids_src_ses_dir, SID, SES, overwrite)
  File "dcm2bids.py", line 353, in bids_run_conversion
    overwrite)
  File "dcm2bids.py", line 437, in bids_purpose_handling
    TE1, TE2 = bids_fmap_echotimes(work_json_fname)
  File "dcm2bids.py", line 758, in bids_fmap_echotimes
    mag1_ser_no = str(int(ser_no) - 1)
ValueError: invalid literal for int() with base 10: '13_e2'
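A defensive fix would be to parse the leading digits of the series label instead of calling int() on it directly (an illustrative sketch, not the actual dcm2bids code):

```python
import re

def series_number(ser_no):
    """Extract the leading integer from a dcm2niix series label,
    tolerating echo suffixes such as '13_e2'."""
    m = re.match(r"(\d+)", str(ser_no))
    if not m:
        raise ValueError(f"No series number in {ser_no!r}")
    return int(m.group(1))
```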

If we manually rename the series in the work/conversion/ directory to match the first subject (removing the '_e2'), the second pass works with no errors.
Do you have any idea why this is happening? Also, is there a way to keep the output BIDS fmap files from skipping numbers? We are having this issue with both the newest and the previous release of dcm2bids.

Thanks,
Lavinia

Docker build . fails in pybids install

Installed /usr/local/lib/python3.5/dist-packages/pybids-0.10.0-py3.5.egg
Searching for numpy>=1.15.2
Reading https://pypi.python.org/simple/numpy/
Downloading https://files.pythonhosted.org/packages/21/94/5d48401d922ad494399f74a973445d831c888ef0cd9437a4276d8a63cfe5/numpy-1.18.0rc1.zip#sha256=7b0b915190cf60e691c17147f5d955e273d4c482b795a7bb168ad4a2fe2fb180
Best match: numpy 1.18.0rc1
Processing numpy-1.18.0rc1.zip
Writing /tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.cfg
Running numpy-1.18.0rc1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-sxux54o6/numpy-1.18.0rc1/egg-dist-tmp-z1qc6e0y
Processing numpy/random/_bounded_integers.pxd.in
Processing numpy/random/_mt19937.pyx
Traceback (most recent call last):
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/tools/cythonize.py", line 61, in process_pyx
    from Cython.Compiler.Version import version as cython_version
ImportError: No module named 'Cython'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/tools/cythonize.py", line 238, in <module>
    main()
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/tools/cythonize.py", line 234, in main
    find_process_files(root_dir)
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/tools/cythonize.py", line 225, in find_process_files
    process(root_dir, fromfile, tofile, function, hash_db)
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/tools/cythonize.py", line 191, in process
    processor_function(fromfile, tofile)
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/tools/cythonize.py", line 66, in process_pyx
    raise OSError('Cython needs to be installed in Python as a module')
OSError: Cython needs to be installed in Python as a module
Running from numpy source directory.
/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py:425: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
  run_build = parse_setuppy_commands()
Cythonizing sources
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 157, in save_modules
    yield saved
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 198, in setup_context
    yield
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 255, in run_setup
    DirectorySandbox(setup_dir).run(runner)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 285, in run
    return func()
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 253, in runner
    _execfile(setup_script, ns)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 47, in _execfile
    exec(code, globals, locals)
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py", line 450, in <module>
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py", line 433, in setup_package
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py", line 240, in generate_cython
RuntimeError: Running cythonize failed!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "setup.py", line 204, in <module>
    'Source': 'https://github.com/jmtyszka/bidskit/',
  File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 117, in do_egg_install
    cmd.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 436, in run
    self.easy_install(spec, not self.no_deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 680, in easy_install
    return self.install_item(None, spec, tmpdir, deps, True)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 727, in install_item
    self.process_distribution(spec, dist, deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 772, in process_distribution
    [requirement], self.local_index, self.easy_install
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 846, in resolve
    dist = best[req.key] = env.best_match(req, ws, installer)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1118, in best_match
    return self.obtain(req, installer)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1130, in obtain
    return installer(requirement)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 699, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 725, in install_item
    dists = self.install_eggs(spec, download, tmpdir)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 906, in install_eggs
    return self.build_and_install(setup_script, setup_base)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1145, in build_and_install
    self.run_setup(setup_script, setup_base, args)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1131, in run_setup
    run_setup(setup_script, args)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 258, in run_setup
    raise
  File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 198, in setup_context
    yield
  File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 169, in save_modules
    saved_exc.resume()
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 144, in resume
    six.reraise(type, exc, self._tb)
  File "/usr/lib/python3/dist-packages/pkg_resources/_vendor/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 157, in save_modules
    yield saved
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 198, in setup_context
    yield
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 255, in run_setup
    DirectorySandbox(setup_dir).run(runner)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 285, in run
    return func()
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 253, in runner
    _execfile(setup_script, ns)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 47, in _execfile
    exec(code, globals, locals)
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py", line 450, in <module>
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py", line 433, in setup_package
  File "/tmp/easy_install-sxux54o6/numpy-1.18.0rc1/setup.py", line 240, in generate_cython
RuntimeError: Running cythonize failed!
The command '/bin/sh -c python3 setup.py install' returned a non-zero code: 1

not seeing duplicate runs

Hi
Thanks for your recent updates! There is a bug such that duplicate runs are not being caught. For example:

Under work/conversion/sub-M80395860/ses-2/ I have the following files:

m80395860--acute_pa_32ch_mb8_v01_r01--EP--23.json
m80395860--acute_pa_32ch_mb8_v01_r01--EP--23.nii.gz
m80395860--acute_pa_32ch_mb8_v01_r01--EP--25.json
m80395860--acute_pa_32ch_mb8_v01_r01--EP--25.nii.gz
m80395860--acute_pa_32ch_mb8_v01_r01_SBRef--EP--22.json
m80395860--acute_pa_32ch_mb8_v01_r01_SBRef--EP--22.nii.gz
m80395860--acute_pa_32ch_mb8_v01_r01_SBRef--EP--24.json
m80395860--acute_pa_32ch_mb8_v01_r01_SBRef--EP--24.nii.gz

Notice the duplicates.

In the output directory, under bidskit_out/sub-M80395860/ses-2/func/, I get only the following files:

sub-M80395860_ses-2_task-acute_run-02_bold.json
sub-M80395860_ses-2_task-acute_run-02_bold.nii.gz
sub-M80395860_ses-2_task-acute_run-02_events.tsv
sub-M80395860_ses-2_task-acute_run-02_sbref.json
sub-M80395860_ses-2_task-acute_run-02_sbref.nii.gz

So it correctly sees that there are 2 runs, because it adds run-02. But it does not copy over the run-01 files.

ALSO: the run order is not preserved by bidskit. I have Runs 23 and 25 in the conversion dir. Bidskit renames run 23 (not run 25) to become run-02. It would greatly help if order was preserved (i.e., the earlier number in the sequence becomes run-01, the next number becomes run-02, etc.).
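A sketch of one way to preserve acquisition order: sort the conversion files by their trailing dcm2niix series number before assigning run indices. The filename pattern and helper names here are assumptions based on the listing above, not bidskit's actual code:

```python
import re

def series_number(fname):
    """Extract the trailing dcm2niix series number,
    e.g. '...--EP--23.nii.gz' -> 23."""
    m = re.search(r'--(\d+)\.nii(?:\.gz)?$', fname)
    return int(m.group(1)) if m else -1

def assign_runs(fnames):
    """Sort by series number so the earliest series becomes run-01,
    the next run-02, and so on."""
    ordered = sorted(fnames, key=series_number)
    return {f: 'run-%02d' % (i + 1) for i, f in enumerate(ordered)}

runs = assign_runs([
    'm80395860--acute_pa_32ch_mb8_v01_r01--EP--25.nii.gz',
    'm80395860--acute_pa_32ch_mb8_v01_r01--EP--23.nii.gz',
])
# series 23 maps to run-01, series 25 to run-02
```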

Thank you!!
Yoni (CU Boulder Wager lab)

subject flag

Is your feature request related to a problem? Please describe.
not really a problem, more a convenience to run single subjects in the sourcedata folder (instead of all of them).

Describe the solution you'd like
Addition of subject flag (e.g. -s SUBJECT01) when calling bidskit to specify which subject in sourcedata folder to convert.
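A minimal sketch of what such a flag could look like with argparse; the option names here are illustrative, not bidskit's real CLI:

```python
import argparse

# Hypothetical fragment of the bidskit argument parser with a subject filter
parser = argparse.ArgumentParser(description='DICOM to BIDS converter')
parser.add_argument('--indir', default='dicom', help='DICOM root directory')
parser.add_argument('-s', '--subject', default=None,
                    help='convert only this subject in the sourcedata folder')

args = parser.parse_args(['-s', 'SUBJECT01'])

# In the main subject loop, something like:
# if args.subject and sid != args.subject:
#     continue
```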

Subject/session names cannot contain "-" or "_"

I am trying the newest version of bidskit now. I have already used the older version. The first thing I noticed is that it does not accept subject names with "-" or "_". The previous version could handle it, not this one. Nearly every dataset that I have worked on has subject IDs with either "_" or "-". Changing this would require recoding the IDs across the whole dataset (cognitive data, etc), something that many collaborators would not agree with. Is there a way to change this? Thank you. For now I'll use the older version.
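For context, the BIDS specification restricts participant labels to alphanumeric characters, which is presumably why the newer version rejects these IDs. A sketch of one workaround: strip the illegal characters while keeping a mapping back to the original IDs (the helper name is hypothetical, not bidskit code):

```python
import re

def bids_safe_label(raw_id):
    """Drop characters that are illegal in a BIDS participant label
    (anything other than letters and digits)."""
    return re.sub(r'[^a-zA-Z0-9]', '', raw_id)

# Keep a record (e.g. for participants.tsv) so original IDs are not lost
id_map = {raw: bids_safe_label(raw) for raw in ['007_S_4620', 'P-01_b']}
```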

Issue with fieldmap conversion

When I try to convert fmaps form SIEMENS Scanner I get the following error:

docker run -it -v /home/demenzbild/Schreibtisch/TEST_HC_BIDS/:/mnt rnair07/bidskit --indir=/mnt/dicom --outdir=/mnt/source --no-sessions


DICOM to BIDS Converter

Software Version : 1.1.1
DICOM Root Directory : /mnt/dicom
BIDS Source Directory : /mnt/source
BIDS Derivatives Directory : /mnt/derivatives/conversion
Working Directory : /mnt/work/conversion
Use Session Directories : No
Overwrite Existing Files : No


Pass 2 : Populating BIDS source directory

Creating new dataset_description.json

Processing subject TAURESTHCm002007S4620

BIDS working subject directory : /mnt/work/conversion/sub-TAURESTHCm002007S4620
BIDS source subject directory : /mnt/source/sub-TAURESTHCm002007S4620

  • Excluding protocol Perfusion_Weighted
  • Excluding protocol Axial_3TE_T2_STAR
  • Excluding protocol Axial_MB_rsfMRI_(Eyes_Open)
  • Excluding protocol Axial_MB_DTI
  • Excluding protocol Sagittal_3D_FLAIR
    Organizing Field_Mapping
    Identifying fieldmap image type
    GRE detected
    Identifying magnitude and phase images
    Echo 1 magnitude
    Populating BIDS source directory
    Copying 007_S_4620--Field_Mapping--GR--9_e1.nii.gz to sub-TAURESTHCm002007S4620_run-01_task-rest_magnitude.nii.gz
  • Excluding protocol Axial_3TE_T2_STAR
  • Excluding protocol Axial_3TE_T2_STAR
    Organizing Field_Mapping
    Identifying fieldmap image type
    GRE detected
    Identifying magnitude and phase images
Traceback (most recent call last):
  File "/app/dcm2bids.py", line 944, in <module>
    main()
  File "/app/dcm2bids.py", line 233, in main
    bids_run_conversion(work_conv_dir, first_pass, prot_dict, bids_src_ses_dir, SID, SES, overwrite)
  File "/app/dcm2bids.py", line 353, in bids_run_conversion
    overwrite)
  File "/app/dcm2bids.py", line 437, in bids_purpose_handling
    TE1, TE2 = bids_fmap_echotimes(work_json_fname)
  File "/app/dcm2bids.py", line 758, in bids_fmap_echotimes
    mag1_ser_no = str(int(ser_no) - 1)
ValueError: invalid literal for int() with base 10: '10_e2_ph'

Can you help me with that?
All the other sequences are correctly converted when I exclude the fmaps.

Thanks,
balotay
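The crash above happens because dcm2niix appends echo/phase suffixes to the series field ('10_e2_ph'), which int() cannot parse. A possible fix, sketched here rather than taken from bidskit itself, is to strip the suffix before converting:

```python
import re

def parse_series_number(ser_no):
    """Parse the leading integer from a series field that may carry
    dcm2niix echo/phase suffixes such as '10_e2_ph'."""
    m = re.match(r'(\d+)', str(ser_no))
    if m is None:
        raise ValueError('no leading series number in %r' % ser_no)
    return int(m.group(1))
```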

Integrate BIDS validator

Add a BIDS validator at the end of 2nd pass conversion to confirm compliance to BIDS 1.0.1
Lenient: use bids_validator from pybids
Strict: external call to node.js validator
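For the strict option, a sketch of what an external call to the node.js validator at the end of the second pass could look like (this assumes bids-validator is on the PATH and is not existing bidskit code):

```python
import shutil
import subprocess

def run_bids_validator(bids_dir):
    """Run the external node.js bids-validator if available;
    return its exit code, or None when it is not installed."""
    exe = shutil.which('bids-validator')
    if exe is None:
        print('* bids-validator not found - skipping strict validation')
        return None
    proc = subprocess.run([exe, bids_dir], capture_output=True, text=True)
    print(proc.stdout)
    return proc.returncode
```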

handling of new subjects in dicom directory

It should be possible to add a subject/session to the dicom directory and then run dcm2bids; the expected behavior would be that dcm2bids recognizes which subjects/sessions have not been processed yet and processes them according to the existing Protocol_Translator.json (instead of skipping all subjects/sessions that don't have the temporary conv directory).

Missing indent in dcm2bids.py line 279

While running bidskit for the first time today I got the error message "IndentationError: expected an indented block" in line 279 of dcm2bids.py. After manually inserting this indent, everything worked fine.

.gz files not actually gzipped

Hello!

I've been having a strange problem running bidskit on my data that I wasn't sure how to deal with. It appears that files are being renamed with the .gz extension but they are not actually being compressed. I run into this problem after running bidskit for the second time when files appear in my final bids directory. Do you have any guidance on what might be causing this issue?
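One quick way to confirm the diagnosis is to check the two-byte gzip magic number instead of trusting the .gz extension. This is a small diagnostic sketch, not part of bidskit:

```python
import gzip
import os
import tempfile

def is_really_gzipped(path):
    """True if the file starts with the gzip magic bytes 0x1f 0x8b."""
    with open(path, 'rb') as fd:
        return fd.read(2) == b'\x1f\x8b'

# Demo: one real gzip file and one plain file mislabeled as .gz
tmpdir = tempfile.mkdtemp()
real = os.path.join(tmpdir, 'real.nii.gz')
fake = os.path.join(tmpdir, 'fake.nii.gz')
with gzip.open(real, 'wb') as fd:
    fd.write(b'data')
with open(fake, 'wb') as fd:
    fd.write(b'data')
```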

Thank you!

Doesn't really do the conversion?

Hi, Mike,
Thanks for developing this toolbox.
I am new to docker and python, and trying to convert my raw data from Siemens Trio 3T to bids.
I tried the docker version, and it looks good, but there are no converted files.
Below is my code and the output in the power shell.
[screenshot: docker run command and output in PowerShell]

Actually, the folders are created, but all these folders are empty:
[screenshot: output folders created but empty]

Here is the structure of my dicom folder:
[screenshot: dicom folder structure]

I even tried to copy the T1 and fMRI data to the session folder, but it is the same.

Looking forward to your reply, thanks

Best,
Chuan-Peng

docker conversion

Hi,

I'm trying to convert a test data set with the docker command but I always get an empty Protocol_Translator.json file. Can you help me with this issue?

I'm using the following command: docker run -it -v /Users/xyz/Desktop/Converting_BIDS_test/mydicom/:/mnt rnair07/bidskit --indir=/mnt/dicom --outdir=/mnt/source

folder structure of mydicoms:
mydicoms/
  subject1/
    dcmfolder/
  subject2/
    dcmfolder/

Thanks,
Tragus

Some info missing from func and fmap .json files

Here are some info that I noticed are not in the final .json file associated with each field map and each functional run:

  • the "IntendedFor" needs to include the full directory of where the corresponding functional run(s) associated with that field map are saved: e.g. "IntendedFor":"func/sub-005_task-ObsLearn_run-01_bold.nii.gz" or "IntendedFor":"func/ses-1/sub-005_task-ObsLearn_run-01_bold.nii.gz". Right now having only "IntendedFor":"task-ObsLearn_run-01_bold.nii.gz" will appear as an error in the bids validator (and I am assuming fmriprep won't run). I guess this could be changed when updating the protocol translator, or during the second pass conversion.
  • the "TotalReadoutTime" is also not added, both to the func and fmap .json. I understand that this can be different depending on the scanner and/or sequence, but it would be nice if there was a way to incorporate it?
  • in my case, I also wanted the "MultibandAccelerationFactor" to be included.

I wrote the following python script that loops over all my subjects and all my runs and adds all this information - in case that helps for fixing the issue within the dcm2bids.py scripts.

#!/usr/bin/env python
"""
Created on Mon Nov 27 15:50:50 2017
@author: Caroline
"""

import os
import sys
import json
from glob import glob

def main():
    
    bids_dir = os.path.realpath('/home/ccharpen/ObsLearn/rawdata/rawBIDS/')

    for bids_sub_dir in glob(bids_dir + '/sub*/'):
        
        print('Subject Directory: %s' % bids_sub_dir)
        func_dir = os.path.join(bids_sub_dir, 'func')
        fmap_dir = os.path.join(bids_sub_dir, 'fmap')
        
        for run in range(8):
            
            func_run_name = glob(func_dir + '/sub*' + str(run+1) + '_bold.json')[0]
            run_dict = read_json(func_run_name)
            #for functional run .json file, add TotalReadoutTime and MultibandAccelerationFactor
            #those parameters are specific to that protocol - update accordingly
            if 'TotalReadoutTime' not in run_dict.keys():
                run_dict['TotalReadoutTime']=0.0432
            if 'MultibandAccelerationFactor' not in run_dict.keys():
                run_dict['MultibandAccelerationFactor']=4
            write_json(func_run_name,run_dict)
            
            #for field maps, add TotalReadoutTime and 'func/' in front of the "IntendedFor" filename
            fmap_pos_name = glob(fmap_dir + '/sub*-pos_run-0' + str(run+1) + '_epi.json')[0]
            pos_dict = read_json(fmap_pos_name)
            if 'TotalReadoutTime' not in pos_dict.keys():
                pos_dict['TotalReadoutTime']=0.0432
            tmp_pos_name = str(pos_dict['IntendedFor'])
            if 'func/' not in tmp_pos_name:
                new_pos_name = 'func/' + tmp_pos_name
                pos_dict['IntendedFor'] = new_pos_name
            write_json(fmap_pos_name,pos_dict)
    
            fmap_neg_name = glob(fmap_dir + '/sub*-neg_run-0' + str(run+1) + '_epi.json')[0]
            neg_dict = read_json(fmap_neg_name)
            if 'TotalReadoutTime' not in neg_dict.keys():
                neg_dict['TotalReadoutTime']=0.0432
            tmp_neg_name = str(neg_dict['IntendedFor'])
            if 'func/' not in tmp_neg_name:
                new_neg_name = 'func/' + tmp_neg_name
                neg_dict['IntendedFor'] = new_neg_name
            write_json(fmap_neg_name,neg_dict)  
        
    # Clean exit
    sys.exit(0)

def read_json(fname):
    """
    Safely read JSON sidecar file into a dictionary
    :param fname: string
        JSON filename
    :return: dictionary structure
    """

    try:
        fd = open(fname, 'r')
        json_dict = json.load(fd)
        fd.close()
    except:
        print('*** JSON sidecar not found - returning empty dictionary')
        json_dict = dict()

    return json_dict


def write_json(fname, meta_dict):
    """
    Write a dictionary to a JSON file
    :param fname: string
        JSON filename
    :param meta_dict: dictionary
        Dictionary
    :return:
    """
    with open(fname, 'w') as fd:
        json.dump(meta_dict, fd, indent=4, separators=(',', ':'))

# This is the standard boilerplate that calls the main() function.
if __name__ == '__main__':
    main()

NifTI files erroneously skipped when "* JSON sidecar not found" due to premature termination of a for loop

I am using v1.1.2; I see the same issue using both the docker container and the source script. I'm having an issue where some files which are correctly recognized on the first pass (based on the console output, the protocols for that subject are being added to the dictionary), but then are omitted in the second pass in assembling the BIDS data set. This appears to be due to a small logical error in dcm2bids.py.

If the ith file in the list of NifTI filelist has no JSON sidecar, then the for loop iterating over the filelist should move onto the next iteration of the for loop. Instead, it is prematurely breaking the loop entirely.

More specifically, I believe break should be continue in the following code section

            if not os.path.isfile(src_json_fname):
                print('* JSON sidecar not found : %s' % src_json_fname)
                break

In particular, I believe this happens when the ith file is the *_ADC.nii.gz" file being generated by dcm2niix for DWI files. It is creating both a "*.nii.gz" image and an "*_ADC.nii.gz"; the former has a corresponding JSON file but the latter does not, causing the for loop to be prematurely terminated due to a lack of a corresponding JSON sidecar. All of the NifTI files that come after that "*_ADC.nii.gz" file within the filelist (enumerate(filelist)) fail to be transferred over to the BIDS data set.
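For illustration, here is the loop body with the one-word fix applied; the names follow the snippet above, but this is a sketch rather than the exact dcm2bids.py context:

```python
import os

def files_with_sidecars(filelist):
    """Keep only NIfTI files that have a JSON sidecar, skipping (not
    aborting on) files like *_ADC.nii.gz that dcm2niix writes without one."""
    kept = []
    for src_nii_fname in filelist:
        src_json_fname = src_nii_fname.replace('.nii.gz', '.json')
        if not os.path.isfile(src_json_fname):
            print('* JSON sidecar not found : %s' % src_json_fname)
            continue  # was 'break', which dropped every later file
        kept.append(src_nii_fname)
    return kept
```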

Apologies if I have made any mistakes in this.

Prescan DICOM files to generate Protocol_Translator

It's much faster to generate Protocol_Translator.json without dcm2niix conversion. Once the translator has been configured after the first pass, perform conversions and BIDS population on a subject/session basis.

IntendedFor should populate only if the file pointed at exists

Any advice on how we should check whether the file that IntendedFor points at exists, and only put it in the fmap .json if it really does? It often happens that some patients have missing task fMRI files but the Protocol_Translator has an entry that points to them. This later creates problems during validation.

Can we handle this easily at the dcm2bids.py level or do you recommend another solution?
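A sketch of how this could be handled when writing the fmap sidecar, keeping only targets that exist on disk; the function name and call pattern are assumptions, not existing dcm2bids.py code:

```python
import os

def filter_intended_for(fmap_dict, bids_sub_dir, rel_targets):
    """Write IntendedFor with only those target images that actually
    exist under the subject directory (paths relative to it)."""
    existing = [t for t in rel_targets
                if os.path.isfile(os.path.join(bids_sub_dir, t))]
    if existing:
        fmap_dict['IntendedFor'] = existing
    else:
        fmap_dict.pop('IntendedFor', None)  # drop stale entries entirely
    return fmap_dict
```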

Consistent pydicom version

dcm2bids.py depends on pydicom 1.0.0.a1 which is the latest github release. However most platforms use 0.9.9 which is the latest PyPi version. The import names differ (v1 uses "import pydicom", v0.9.9 uses "import dicom"). This throws errors on any system not using the alpha release.
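One way to support both releases in dcm2bids.py is an import shim; this is a sketch, and the final fallback simply records that neither package is installed:

```python
try:
    import pydicom as dicom  # >= 1.0.0 uses 'import pydicom'
except ImportError:
    try:
        import dicom         # 0.9.9 PyPI release uses 'import dicom'
    except ImportError:
        dicom = None         # pydicom not installed at all

# Downstream code can then use the single name 'dicom' on either version
```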
