
dcmstack's People

Contributors

a-detiste, ag-bell, bmoloney, emollier, ghisvail, mih, moloney, pvelasco, satra, yarikoptic


dcmstack's Issues

pytest errors

Hello.
Sorry if this is a naive question.

(Python 3.7)
If I run python setup.py test I get some warnings, but at the end it reports: Ran 136 tests in 3.420s, OK.
Same thing with nosetests.

But if I use pytest, I get 101 failed, 35 passed, 187 warnings in 4.06 seconds. You can see the full output here: https://paste.centos.org/view/b4445e5a

Problems with large multisequence data

Hi,
I got a problematic dataset to convert coming from our 7T scanner. Not only is it big (5 GB uncompressed), but it also includes multiple sequences (and at least one of them is multivolume). There is no way to separate the sequences using filenames, but even if I try to sort them by sequence name taken from the DICOM headers, dcmstack fails (and not because it runs out of memory). I've been told that 7T is the future, so it would be cool if dcmstack would support such datasets ;)

https://www.dropbox.com/s/s731ex9oaghx9i0/Vaso.tar.xz

ImageOrientationPatient and ImageType issues for creating stacks.

There were two issues which were preventing many exported DICOM folders from stacking properly for me with your package.

  1. For whatever reason, there is often a small amount of variation in ImageOrientationPatient in the least significant digits (usually < 1e-6) which would prevent images from the same series from stacking properly. I had to change stack_and_group to round this field to get stacking to work.

  2. Some scanners export derived and original images which are otherwise mostly indistinguishable by DICOM tags, which would cause stacking errors. This could be fixed by allowing lists to be included in keys for grouping (would need to be converted to a tuple when creating the key for the dict) and adding ImageType to the default group keys. (Adding ImageOrientationPatient to the group keys is also helpful for splitting survey series into the three appropriate stacks, similar in behavior to dcm2nii.)

Adding support for lists in keys should not be too hard (just requires tuple conversion for the list in the key).

I'm not enough of a DICOM expert to know how much ImageOrientationPatient can be safely rounded in general, but adding an option to round that field would make it easier to fix a lot of errors.
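The two changes described above can be sketched as follows (the helper name and the 5-digit tolerance are illustrative, not dcmstack's actual API):

```python
# Sketch: make DICOM attributes usable as dict grouping keys by
# rounding floats (to absorb sub-1e-6 jitter in ImageOrientationPatient)
# and converting list values to hashable tuples (for ImageType).

def to_group_key(value, ndigits=5):
    """Round floats and convert lists to hashable tuples, recursively."""
    if isinstance(value, (list, tuple)):
        return tuple(to_group_key(v, ndigits) for v in value)
    if isinstance(value, float):
        return round(value, ndigits)
    return value

# Two orientations differing only below 1e-6 collapse to the same key:
iop_a = [1.0, 0.0, 0.0000004, 0.0, 1.0, 0.0]
iop_b = [1.0, 0.0, 0.0,       0.0, 1.0, 0.0]
assert to_group_key(iop_a) == to_group_key(iop_b)
# The result is hashable, so it can serve directly as a dict key:
groups = {to_group_key(iop_a): ["file1.dcm", "file2.dcm"]}
```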

AttributeError: 'module' object has no attribute 'ignore_non_text_bytes'

I am looking into packaging dcmstack for NeuroDebian. Running the tests on Debian testing leads to two errors:

======================================================================
ERROR: test_extract.TestMetaExtractor.test_get_elem_key
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/mih/debian/dcmstack/dcmstack/test/test_extract.py", line 79, in test_get_elem_key
    ignore_rules = (extract.ignore_non_text_bytes,)
AttributeError: 'module' object has no attribute 'ignore_non_text_bytes'

======================================================================
ERROR: test_extract.TestMetaExtractor.test_get_elem_value
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/mih/debian/dcmstack/dcmstack/test/test_extract.py", line 88, in test_get_elem_value
    ignore_rules = (extract.ignore_non_text_bytes,)
AttributeError: 'module' object has no attribute 'ignore_non_text_bytes'

Is extract.py outdated in the current HEAD?

Thanks in advance!

Slice thickness calculation?

Hi, I am presently trying to stack a number of DICOM files into a NIfTI with the help of dcmstack.

In the NIfTI header I get the following section (as read with AFNI's 3dinfo):

R-to-L extent:    -8.815 [R] -to-     9.121 [L] -step-     0.304 mm [ 60 voxels]
A-to-P extent:    -6.375 [A] -to-     2.325 [P] -step-     0.300 mm [ 30 voxels]
I-to-S extent:    -4.639 [I] -to-     4.861 [S] -step-     0.500 mm [ 20 voxels]

Or, in plain text:

pixdim          : [ -1.00000000e+00   3.03997457e-01   3.00000012e-01   5.00000000e-01
   1.50000000e+03   1.00000000e+00   1.00000000e+00   1.00000000e+00]

This is what I see in the headers of the DICOM files from which I perform the conversion:

(0018, 0050) Slice Thickness                     DS: '0.35'
(0028, 0030) Pixel Spacing                       DS: ['0.3', '0.3039974672']

Any ideas why dcmstack reads the x and y resolutions correctly, but not the z? Does it try to compute it maybe from something else?
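A likely explanation (an assumption about dcmstack's internals, not a confirmed reading of its code): converters typically derive the z step from the distance between consecutive ImagePositionPatient values projected onto the slice normal, not from the (0018,0050) SliceThickness tag, so 0.35 mm thick slices acquired 0.5 mm apart yield a 0.5 mm step. A sketch with hypothetical slice positions:

```python
import numpy as np

# ImageOrientationPatient gives the row and column direction cosines;
# their cross product is the slice normal.
iop = np.array([1, 0, 0, 0, 1, 0], dtype=float)
normal = np.cross(iop[:3], iop[3:])

# Hypothetical per-slice ImagePositionPatient values, 0.5 mm apart:
positions = [np.array([0.0, 0.0, 0.5 * i]) for i in range(20)]

# Spacing = distance between consecutive positions along the normal.
spacings = [np.dot(normal, b - a) for a, b in zip(positions, positions[1:])]
assert abs(spacings[0] - 0.5) < 1e-9  # the spacing, not the 0.35 mm thickness
```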

Conversion of multi-channel multi-echo MRI data

Hi,

I am trying to convert a dual-echo multi-coil MRI dataset (in total 5D 58x58x40x2x32) from DICOMs to NIFTI using dcmstack.

My problem is that I seem not to be able to figure out the right command to set the correct vector-var and time-var arguments from the command line.

Details:
I would like the data to get sorted by the DICOM tags for CoilString (I think this is vendor specific): (0x051, 0x0100f) and EchoTime: (0x0018, 0x0081). CoilStrings are strings running from "A01" to "A32" in this case. EchoTime is a numeric value.
For conversion I tried (as indicated by the help string)

 dcmstack --embed-meta --time-var 0x0018_0x0081 --vector-var 0x0051_0x0100f datadir

as well as variants with the tags wrapped in quotes.
Unfortunately this always results in the error:

UserWarning: Error adding file {...} to stack: The image collides with one already in the stack

I checked the DICOMs manually using pydicom and they seem intact, and the tags are set correctly. Am I simply using it wrong / misunderstanding the meaning of the arguments? I also tried giving it the metadata keys "EchoTime" and "CsaSeries.CoilString" as arguments instead of the DICOM tags, but that did not seem to make a difference.
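For the stack to assemble, the (vector, time) key derived from the two variables must place every file in a unique slot at each slice position; the collision warning means two files mapped to the same slot. A stand-in illustration of the uniqueness requirement (plain dicts, not dcmstack's internals):

```python
# Sketch: what --vector-var / --time-var ordering has to achieve.
# Each (CoilString, EchoTime) pair must be unique per slice, or two
# images "collide" in the stack. Header dicts here are placeholders.

headers = [
    {"CoilString": "A01", "EchoTime": 10.0},
    {"CoilString": "A01", "EchoTime": 30.0},
    {"CoilString": "A02", "EchoTime": 10.0},
    {"CoilString": "A02", "EchoTime": 30.0},
]

keys = [(h["CoilString"], h["EchoTime"]) for h in headers]
# No duplicates: every file lands in its own (vector, time) slot.
assert len(set(keys)) == len(keys)
```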

I am happy to provide example data on request.

Many thanks for dcmstack! It is a great tool.

test error with Python 3.8

Running the tests for v0.8 with nose 1.3.7, nibabel 3.0.1, and pydicom 1.4.1 on Linux (NixOS):

======================================================================
ERROR: test.test_dcmmeta.TestGetMeta.test_invalid_index
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/nix/store/c7sf35dvi8pffp5ynj3icfc1czgikbc8-python3.8-nose-1.3.7/lib/python3.8/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/build/source/test/test_dcmmeta.py", line 991, in test_invalid_index
    assert_raises(IndexError,
  File "/nix/store/k82p3h05vbca3k5f1in5iavgbqxq03bm-python3-3.8.2/lib/python3.8/unittest/case.py", line 816, in assertRaises
    return context.handle('assertRaises', args, kwargs)
  File "/nix/store/k82p3h05vbca3k5f1in5iavgbqxq03bm-python3-3.8.2/lib/python3.8/unittest/case.py", line 202, in handle
    callable_obj(*args, **kwargs)
TypeError: 'NoneType' object is not callable

----------------------------------------------------------------------
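The TypeError suggests unittest ended up trying to call None. One plausible cause (an assumption; the project's actual fix may differ): assertRaises received an already-evaluated expression, whose value was None, instead of a callable. The context-manager form sidesteps passing a callable at all:

```python
import unittest

# Minimal sketch: the context-manager form of assertRaises never
# takes a callable argument, so nothing can end up calling None.

class T(unittest.TestCase):
    def runTest(self):
        data = [1, 2, 3]
        with self.assertRaises(IndexError):
            data[10]            # raises, and is caught by the context

result = T().run()
assert result.wasSuccessful()
```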

confusion about AcquisitionTime

There is possible confusion about the AcquisitionTime label, as it seems to store only one value per volume (that of the first slice) and does not store per-slice AcquisitionTime values in ext['time']['slices'].

I was expecting that nii.get_meta('AcquisitionTime', (0,0,0,0)) and nii.get_meta('AcquisitionTime', (0,0,1,0)) would give different time values corresponding to the slice acquisition time.

How could I get such values stored into the NIfTI extension without storing the field in 2D?
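In the meantime, per-slice times can be pulled straight from the source headers with pydicom; the file iteration is omitted here, but a sketch of parsing the DICOM TM string each slice carries:

```python
# Sketch: converting a DICOM TM value (HHMMSS.FFFFFF) to seconds so
# per-slice AcquisitionTime values can be compared and sorted.
# Reading the tag from each file via pydicom.dcmread is assumed.

def tm_to_seconds(tm):
    """Convert a DICOM TM string like '105330.109000' to seconds."""
    hh, mm, ss = int(tm[0:2]), int(tm[2:4]), float(tm[4:])
    return hh * 3600 + mm * 60 + ss

assert abs(tm_to_seconds("105330.109000") - 39210.109) < 1e-6
```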

ValueError: could not convert string to float

Greetings--

Just trying out the dcmstack --embed-meta option with a 4D dataset (three spatial dimensions, one temporal) comprising many 2D DICOM slices acquired on a new Siemens Prisma 3T scanner. See attached compressed sample DICOM slice...

When I run:
dcmstack --embed-meta ./DICOM_dir
I get the error:
ValueError: could not convert string to float: Amadeus

In this case, the string Amadeus has been used by our MRI sysadmin to anonymize various patient-identifying fields such as:
StudyDescription: 'Amadeus'
PatientID: 'Amadeus'
PatientBirthDate: 'Amadeus'
PatientSex: 'Amadeus'
PatientAge: 'Amadeus'
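A guarded conversion (illustrative, not dcmstack's code) shows the failure mode and a tolerant alternative for fields that an anonymizer has overwritten with placeholder strings:

```python
# Sketch: float('Amadeus') is exactly the crash above; a guarded
# conversion returns None for non-numeric placeholders instead.

def maybe_float(value):
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

assert maybe_float("0.35") == 0.35      # real numeric field
assert maybe_float("Amadeus") is None   # anonymized placeholder
```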

Any help?

Tusen tack from Sweden...

Paul

1.2.752.24.5.2900739981.2018112833518.27259337-13-5-8y6tlt.dcm.zip

Explanation for json file, PhaseEncodingDirectionPositive

Hi, I have a question about this parameter in the JSON file that I generated. If PhaseEncodingDirection is COL, does that mean the real phase encoding direction is Posterior to Anterior (PA)?

Thanks in advance

Hao

parse_and_group error

When running parse_and_group with the master branch, I get an error that ends with this:

error: unpack requires a string argument of length 2
> /software/python/anaconda/envs/devpype/lib/python2.7/site-packages/dcmstack-0.7.dev-py2.7.egg/dcmstack/extract.py(348)_get_elem_value()
    347             if elem.VM == 1:
--> 348                 return struct.unpack(unpack_vr_map[elem.VR], elem.value)[0]
    349             else:

when i peek i get:

ipdb> elem.VR
'US or SS'
ipdb> unpack_vr_map[elem.VR]
'H'
ipdb> elem.value 
'\x00\x10\x00\x00\x10\x00'
ipdb> elem.VM
1
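The failure is reproducible without dcmstack: the element reports VM 1, but its 6-byte value holds three unsigned shorts, so a single 'H' format cannot unpack it. Sizing the format from the byte length (a sketch, not the project's actual fix) works:

```python
import struct

# Reproducing the failure: a 6-byte value cannot be unpacked with a
# single 'H' (2-byte unsigned short) format.
value = b'\x00\x10\x00\x00\x10\x00'
try:
    struct.unpack('H', value)
    raised = False
except struct.error:            # "unpack requires a ... argument of length 2"
    raised = True
assert raised

# More robust: derive the element count from the byte length instead
# of trusting elem.VM.
count = len(value) // struct.calcsize('H')
values = struct.unpack('<%dH' % count, value)
assert values == (4096, 0, 16)
```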

Make reorientation optional

AFAIK dcmstack reorients images so the affine is close to an identity matrix (and they look OK in fslview). This is a time- and memory-consuming process for big datasets. Could this become optional?

dcmstack --list-translators broken

Result:

% dcmstack --list-translators
Traceback (most recent call last):
  File "/usr/bin/dcmstack", line 9, in <module>
    load_entry_point('dcmstack==0.7.0.dev0', 'console_scripts', 'dcmstack')()
  File "/usr/lib/python2.7/dist-packages/dcmstack/dcmstack_cli.py", line 161, in main
    for translator in translators:
UnboundLocalError: local variable 'translators' referenced before assignment

Most likely, line 161 should be something like this:

for translator in extract.default_translators:

dicom COL,ROW vs nifti ROW,COL

The DICOM patient-centered coordinate system has its x-axis running right-to-left and its y-axis running anterior-to-posterior, swapped from the NIfTI standard. As far as I can tell, dcmstack does not adjust for this difference between DICOM and NIfTI. This results in NIfTIs that are inconsistent with NIfTIs reconstructed with other software, such as Chris Rorden's dcm2nii.

Is this the intended behavior, to not swap rows and columns?
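For reference, the convention difference amounts to a sign flip on the first two rows of the affine, since DICOM uses LPS (+x left, +y posterior) and NIfTI expects RAS (+x right, +y anterior). Illustrative matrices only, not dcmstack's code:

```python
import numpy as np

# Sketch: converting an LPS-convention affine to RAS by negating the
# x and y rows. The voxel sizes here are arbitrary placeholders.

lps_affine = np.diag([0.3, 0.3, 0.5, 1.0])
flip = np.diag([-1.0, -1.0, 1.0, 1.0])      # LPS -> RAS
ras_affine = flip @ lps_affine

assert ras_affine[0, 0] == -0.3             # x flipped
assert ras_affine[1, 1] == -0.3             # y flipped
assert ras_affine[2, 2] == 0.5              # z unchanged
```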

btw. <3 dcmstack. great stuff!

Problematic dataset

Quoting @DerOrfa:
"dcmstack . --file-ext ".IMA" --force-read --output-name remove
--output-ext ".nii" --dest-dir "/tmp/" -v --embed-meta" on it takes
ages, produces a lot of messages like

/afs/cbs.mpg.de/software/pythonlib/lib/python2.7/site-packages/dcmstack-0.7.dev-py2.7.egg/dcmstack/dcmstack.py:1044:
UserWarning: Error adding file
./HN3T110609.MR.RT_ANATOMY.13.189.2011.06.09.10.21.57.312500.111344242.IMA
to stack: data dtype "object" not recognized
(fn, str(e)))

... and eventually ends with no output.

Here's the dataset: https://dl.dropboxusercontent.com/u/412959/S13_MP2RAGE_5_3_TR5000_noGRAPPA_Inv2_PHS.tar.bz2

original dicom voxel order

Hello

I use dcmstack to convert DICOMs with the .to_nifti function and voxel_order='LAS', but I run into trouble getting the correct diffusion directions for diffusion data. So for those cases I would like to write out the data with the same orientation as the DICOMs.
So I set voxel_order='' and tried to convert a sagittal acquisition.
The voxel order I expect (and the one given by dcm2nii) is:
[ 3 -1 2 4 ]
but with dcmstack I obtain: [ -3 -2 -1 4 ]
Do you know why, and whether it is possible to make the order the same as the acquisition?

many thanks

Romain

proper way to install

What is the recommended way to install dcmstack? I downloaded it using the git clone protocol, then ran "python setup.py install". The behavior seemed to be that an egg was created in site-packages. Also, in my Anaconda/Scripts directory I see dcmstack-script.py, dcmstack.exe, and dcmstack.exe.manifest.

The problem is that, from within Python, I can't import dcmstack; it doesn't find it.
Also, when I try running easy_install on the egg, it says it can't find a setup script
in the egg. Thanks for any help, probably user error ;)

Chris

Object of type DcmMetaExtension is not JSON serializable

I'm a bit confused as to why the DcmMetaExtension is not JSON serializable.

The documentation states:

The dictionaries of summarized meta data are encoded with JSON.

I did expect the following to work:

json.dump(nifti_wrapper.meta_ext, configfile)

but I only get:

Object of type DcmMetaExtension is not JSON serializable
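One workaround sketch: json.dump accepts a default hook for objects it does not know how to encode. MetaStub below is a stand-in for DcmMetaExtension, whose real attribute layout may differ; the point is delegating to a plain-dict payload:

```python
import json

# Sketch: a default= hook tells json how to encode a custom object.
# MetaStub is a hypothetical stand-in for DcmMetaExtension.

class MetaStub:
    def __init__(self):
        self.meta = {"RepetitionTime": 2.0}

def encode(obj):
    if isinstance(obj, MetaStub):
        return obj.meta         # delegate to the plain-dict payload
    raise TypeError(type(obj))

text = json.dumps({"ext": MetaStub()}, default=encode)
assert json.loads(text) == {"ext": {"RepetitionTime": 2.0}}
```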

Fail to convert DICOM files

I have a problem converting a set of DICOM files:

/scr/adenauer1/virtualenv/devel/local/lib/python2.7/site-packages/dcmstack-0.7.dev-py2.7.egg/dcmstack/dcmstack.py:1073: UserWarning: Error adding file /tmp/test/1.2.840.113619.2.134.1762873053.2072.1145902912.846_0006_000003_11467783150be0.v2 to stack: 
Traceback (most recent call last):
  File "/scr/adenauer1/virtualenv/devel/bin/dcmstack", line 9, in <module>
    load_entry_point('dcmstack==0.7.dev', 'console_scripts', 'dcmstack')()
  File "build/bdist.linux-x86_64/egg/dcmstack/dcmstack_cli.py", line 315, in main
  File "build/bdist.linux-x86_64/egg/dcmstack/dcmstack.py", line 851, in to_nifti
  File "build/bdist.linux-x86_64/egg/dcmstack/dcmstack.py", line 757, in get_data
  File "build/bdist.linux-x86_64/egg/dcmstack/dcmstack.py", line 650, in get_shape
dcmstack.dcmstack.InvalidStackError: The DICOM stack is not valid: No (non-dummy) files in the stack

You can find the DICOM files and the expected NIfTI output (produced by dcm2nii) here: https://docs.google.com/uc?id=0B77zr9yIiKOTcDh4Y1hHdUdVUEE&export=download

Thanks in advance for looking into this!

Best,
Chris

non utf-8 strings

Tag values that are not UTF-8 encoded (which seems possible in data from Siemens scanners) cause a crash during JSON serialization:

dcmmeta.pyc in _mangle(self, value)
    698     def _mangle(self, value):
    699         '''Go from runtime representation to extension data.'''
--> 700         return json.dumps(value, indent=4)
.
.
.
/usr/lib64/python2.6/json/encoder.pyc in _iterencode(self, o, markers)
    292                     and not (_encoding == 'utf-8')):
    293                 o = o.decode(_encoding)
--> 294             yield encoder(o)
    295         elif o is None:
    296             yield 'null'
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe2 in position 2: invalid continuation byte

Is there a way to force conversion, or to remove the tags that cannot be converted to UTF-8?
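One possible forced conversion (a policy sketch, not dcmstack's behavior): decode with a replacement or byte-preserving codec before serialization, which never raises:

```python
# Sketch: tolerating non-UTF-8 tag values. The byte below reproduces
# the "invalid continuation byte" from the traceback.

raw = b'ab\xe2cd'
try:
    raw.decode('utf-8')
    raised = False
except UnicodeDecodeError:
    raised = True
assert raised

# errors='replace' substitutes U+FFFD; latin-1 maps every byte 1:1.
assert raw.decode('utf-8', errors='replace') == 'ab\ufffdcd'
assert raw.decode('latin-1') == 'ab\xe2cd'
```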

dcmstack support for multi-EPI

I am using dcmstack via its nipype interface. I have reported my issues here, and it seems my problem stems from dcmstack not being able to deal with multi-EPI DICOMs automatically. As stated in the linked issue report, dcmstack works nicely with my anatomical DICOMs, for instance.

How can I do the conversion for my multi-EPIs with your software?

support for python 3

is there a plan to support python 3?

  File "/users/satra/miniconda3/envs/banda/lib/python3.5/site-packages/dcmstack/dcmstack.py", line 1054
    except Exception, e:
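For reference, the quoted line uses Python 2-only syntax; under Python 3 the comma form is a SyntaxError and the equivalent is the "as" form:

```python
# The py2-only syntax at dcmstack.py line 1054:
#     except Exception, e:
# and its Python 3 equivalent:
try:
    raise ValueError("demo")
except Exception as e:          # py3 uses 'as', not a comma
    caught = str(e)
assert caught == "demo"
```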

does not convert Philips DTI dcm files

Pointed dcmstack at a directory containing one DTI scan and it got confused on the fourth dimension. Command:
dcmstack -v DTI_2p5DB_SOME_NOISE
Processing source directory DTI_2p5DB_SOME_NOISE
Found 1088 source files in the directory
Found 1 groups of DICOM images
Writing out stack to path DTI_2p5DB_SOME_NOISE/301-DTI_2.5DB_SOME_NOISE_SENSE.nii.gz
Traceback (most recent call last):
  File "/home/toddr/Software/anaconda2/bin/dcmstack", line 9, in <module>
    load_entry_point('dcmstack', 'console_scripts', 'dcmstack')()
  File "/home/toddr/Software/dcmstack/src/dcmstack/dcmstack_cli.py", line 324, in main
    nii = stack.to_nifti(args.voxel_order, gen_meta)
  File "/home/toddr/Software/dcmstack/src/dcmstack/dcmstack.py", line 857, in to_nifti
    data = self.get_data()
  File "/home/toddr/Software/dcmstack/src/dcmstack/dcmstack.py", line 763, in get_data
    stack_shape = self.get_shape()
  File "/home/toddr/Software/dcmstack/src/dcmstack/dcmstack.py", line 726, in get_shape
    raise InvalidStackError("Unable to guess key for sorting the "
dcmstack.dcmstack.InvalidStackError: The DICOM stack is not valid: Unable to guess key for sorting the fourth dimension

Add support for tar files

Often DICOM files are bundled in tar files. It would be awesome if dcmstack could support reading them directly.
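For reference, the standard library's tarfile module can hand each member to a DICOM reader as a file-like object without unpacking to disk. A self-contained sketch with placeholder data (each extracted object could be passed to pydicom.dcmread):

```python
import io
import tarfile

# Build an in-memory archive as a stand-in for a real DICOM tarball.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    data = b'DICM placeholder'
    info = tarfile.TarInfo(name='slice001.dcm')
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Read members back without touching the filesystem.
buf.seek(0)
with tarfile.open(fileobj=buf, mode='r') as tar:
    members = [m.name for m in tar.getmembers()]
    payload = tar.extractfile('slice001.dcm').read()

assert members == ['slice001.dcm']
assert payload == b'DICM placeholder'
```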

Consider making me a collaborator on dcmstack?

Hi Brendan

Would you consider making me a collaborator on this repo, so I can do things like merge pull requests? I promise to check with you before doing anything remotely controversial...

Matthew

setup.py

Sorry, I am no expert in setuptools, but the current setup.py installs only an egg file, and so dcmstack cannot be imported (I am using a virtualenv, and the system package gets imported instead of the virtualenv one).
Maybe recent changes have caused this problem.

Extending metadata JSON with Brain Imaging Data Structure fields

Brain Imaging Data Structure (BIDS) is a new specification describing how a neuroimaging dataset should be organized and described. Part of the standard is a set of JSON sidecar files with acquisition parameters that are essential for performing data analysis but are not present (or reliably reported) in the NIfTI header (see here for details). Such fields include but are not limited to:

  • EffectiveEchoSpacing
  • RepetitionTime
  • PhaseEncodingDirection
  • SliceTiming
  • SliceEncodingDirection
  • EchoTime

Some of those fields are part of the DICOM ontology and are directly accessible from standard DICOM headers (such as RepetitionTime and EchoTime), and some are not part of the standard DICOM nomenclature and require extraction using vendor- and sequence-specific heuristics (for example PhaseEncodingDirection or EffectiveEchoSpacing). We added them to the BIDS standard because they are necessary for data processing.

I would like to send a PR that would include code to extract those fields from DICOM metadata and put them in the root of the embedded JSON file. For example:

{
    "RepetitionTime": 4.0,
    "SliceEncodingDirection": "z",
    "PhaseEncodingDirection": "x-",
    "EffectiveEchoSpacing": 0.00074,
    "global": {
        "const": {
            "SpecificCharacterSet": "ISO_IR 100",
            "ImageType": [
                "ORIGINAL",
                "PRIMARY",
                "M",
                "ND"
            ],
            "StudyTime": 69244.484,
            "SeriesTime": 71405.562,
            "Modality": "MR",
            "Manufacturer": "SIEMENS",
            "SeriesDescription": "2D 16Echo qT2",
            "ManufacturerModelName": "TrioTim",
            "ScanningSequence": "SE",
            "SequenceVariant": "SP",
            "ScanOptions": "SAT1",
            "MRAcquisitionType": "2D",
            "SequenceName": "se2d16",
            "AngioFlag": "N",
            "SliceThickness": 7.0,
            ...

Before I commit time to doing this, I would love to hear your opinion. For some of the fields (such as PhaseEncodingDirection) I only have code to infer them for Siemens scans, but such a field can simply be omitted when a different type of scan is used. We could also make all of the BIDS fields optional, controlled by a command-line flag.
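The proposed layout amounts to merging the derived fields into the root of the existing summary, alongside the "global" block. A sketch with illustrative values:

```python
# Sketch of the proposed JSON layout: vendor-derived BIDS fields sit
# at the root, next to the existing "global" summary. Values are
# illustrative; the extraction heuristics are vendor-specific.

meta = {"global": {"const": {"Modality": "MR"}}}
bids_fields = {"RepetitionTime": 4.0, "PhaseEncodingDirection": "x-"}

merged = {**bids_fields, **meta}     # BIDS keys at the root
assert merged["RepetitionTime"] == 4.0
assert merged["global"]["const"]["Modality"] == "MR"
```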

Parallelization and multithreading support?

Hi, seeing how grouping/conversion takes a lot of time, and I am still unable to use group_and_stack via nipype, I set out to parallelize my current dcmstack/for-loop workflow.

You can see the version I am referring to right now here.

Strangely, running a pool of size 2, or 4, or 8, or 16 (4 cores on this machine), I see no marked speed improvement. Any idea why that could be? Is your code already parallelizing stuff? Would group_and_stack (which I am not currently using) distribute its tasks over multiple threads?
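One note that may explain the lack of speedup (an assumption about the workload): a thread pool cannot accelerate pure-Python CPU-bound work because of the GIL, and even a process pool helps only if the per-series tasks are truly independent and not I/O-bound. A minimal pool sketch with a stand-in convert() (for real CPU-bound conversion, multiprocessing.Pool rather than the thread-based pool shown here is what actually buys speedup):

```python
from multiprocessing.dummy import Pool  # thread-based pool (GIL applies)

def convert(series_id):
    # Stand-in for one per-series dcmstack conversion.
    return series_id * 2

with Pool(4) as pool:
    results = pool.map(convert, [1, 2, 3, 4])

assert results == [2, 4, 6, 8]
```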

Conversion overflow

Hi,
When converting some PET images I get negative intensities in the output NIfTI for voxels with intensities > 2^15 on the original DICOM (i.e., probably a signed int overflow). I believe this is probably an upstream issue (pydicom or nibabel), and I am happy to report there and track. I am hoping first to see your thoughts on the issue.
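The symptom is reproducible in isolation (a sketch of the mechanism, not a claim about where in the stack the bug lives): the header declares unsigned 16-bit pixels (Pixel Representation = 0), so reading the buffer as signed int16 wraps values above 2^15 - 1 into negatives.

```python
import numpy as np

# A pixel value above 2**15 - 1, stored as the header declares (uint16):
raw = np.array([40000], dtype=np.uint16)

# Reinterpreting the same bytes as signed int16 wraps it negative:
as_signed = raw.view(np.int16)
assert as_signed[0] < 0                     # the overflow artifact

# Honoring the declared unsigned dtype preserves the value:
assert int(raw.astype(np.int64)[0]) == 40000
```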


Below is the DICOM header data of an offending image for reference:


    Dicom-Meta-Information-Header
    Used TransferSyntax:
    (0002,0000) UL 180 # 4,1 File Meta Information Group Length
    (0002,0001) OB 00\01 # 2,1 File Meta Information Version
    (0002,0002) UI [1.2.840.10008.5.1.4.1.1.128] # 28,1 Media Storage SOP Class UID
    (0002,0003) UI [1.3.12.2.1107.5.2.38.51040.2016110110532420312500560] # 52,1 Media Storage SOP Instance UID
    (0002,0010) UI [1.2.840.10008.1.2.1] # 20,1 Transfer Syntax UID
    (0002,0012) UI [1.3.12.2.1107.5.2] # 18,1 Implementation Class UID
    (0002,0013) SH [MR_VB20P] # 8,1 Implementation Version Name

    Dicom-Data-Set
    Used TransferSyntax: 1.2.840.10008.1.2.1
    (0008,0005) CS [ISO_IR 100] # 10,1-n Specific Character Set
    (0008,0008) CS [ORIGINAL\PRIMARY\STATIC\AC] # 26,2-n Image Type
    (0008,0012) DA [20161101] # 8,1 Instance Creation Date
    (0008,0013) TM [105330.109000 ] # 14,1 Instance Creation Time
    (0008,0016) UI [1.2.840.10008.5.1.4.1.1.128] # 28,1 SOP Class UID
    (0008,0018) UI [1.3.12.2.1107.5.2.38.51040.2016110110532420312500560] # 52,1 SOP Instance UID
    (0008,0020) DA [20161101] # 8,1 Study Date
    (0008,0021) DA [20161101] # 8,1 Series Date
    (0008,0022) DA [20161101] # 8,1 Acquisition Date
    (0008,0023) DA [20161101] # 8,1 Content Date
    (0008,0030) TM [093452.812000 ] # 14,1 Study Time
    (0008,0031) TM [100405.000000 ] # 14,1 Series Time
    (0008,0032) TM [100405.000000 ] # 14,1 Acquisition Time
    (0008,0033) TM [105330.109000 ] # 14,1 Content Time
    (0008,0050) SH (no value) # 0,1 Accession Number
    (0008,0060) CS [PT] # 2,1 Modality
    (0008,0070) LO [SIEMENS ] # 8,1 Manufacturer
    (0008,0080) LO [Anonymous ] # 10,1 Institution Name
    (0008,0081) ST [Anonymous ] # 10,1 Institution Address
    (0008,0090) PN [Anonymous ] # 10,1 Referring Physician's Name
    (0008,1010) SH [Anonymous ] # 10,1 Station Name
    (0008,1030) LO [XX^XX] # 32,1 Study Description
    (0008,103e) LO [*Abd_MRAC_PET_AC Images ] # 24,1 Series Description
    (0008,1040) LO [Anonymous ] # 10,1 Institutional Department Name
    (0008,1050) PN [Anonymous ] # 10,1-n Performing Physician's Name
    (0008,1090) LO [Biograph_mMR] # 12,1 Manufacturer's Model Name
    (0008,1140) SQ (Sequence with defined length) # 306,1 Referenced Image Sequence
    (fffe,e000) na (Item with defined length)
    (0008,1150) UI [1.2.840.10008.5.1.4.1.1.4] # 26,1 Referenced SOP Class UID
    (0008,1155) UI [1.3.12.2.1107.5.2.38.51040.2016110109493929278108444] # 52,1 Referenced SOP Instance UID
    (fffe,e000) na (Item with defined length)
    (0008,1150) UI [1.2.840.10008.5.1.4.1.1.4] # 26,1 Referenced SOP Class UID
    (0008,1155) UI [1.3.12.2.1107.5.2.38.51040.2016110109564887525712923] # 52,1 Referenced SOP Instance UID
    (fffe,e000) na (Item with defined length)
    (0008,1150) UI [1.2.840.10008.5.1.4.1.1.4] # 26,1 Referenced SOP Class UID
    (0008,1155) UI [1.3.12.2.1107.5.2.38.51040.2016110109482656483807702] # 52,1 Referenced SOP Instance UID
    (0008,1250) SQ (Sequence with defined length) # 150,1 Related Series Sequence
    (fffe,e000) na (Item with defined length)
    (0020,000d) UI [1.3.12.2.1107.5.2.38.51040.30000016103122414446800000004] # 56,1 Study Instance UID
    (0020,000e) UI [1.3.12.2.1107.5.2.38.51040.2016110110021659897623931.0.0.0] # 58,1 Series Instance UID
    (0040,a170) SQ (Sequence with defined length) # 0,1 Purpose of Reference Code Sequence
    (0010,0010) PN [Anonymous ] # 10,1 Patient's Name
    (0010,0020) LO [unknown ] # 8,1 Patient ID
    (0010,0030) DA [XXXXXXXX] # 8,1 Patient's Birth Date
    (0010,0040) CS [X ] # 2,1 Patient's Sex
    (0010,1010) AS [000Y] # 4,1 Patient's Age
    (0010,1020) DS [0.00] # 4,1 Patient's Size
    (0010,1030) DS [00] # 2,1 Patient's Weight
    (0018,0050) DS [2.03125 ] # 8,1 Slice Thickness
    (0018,1000) LO [51040 ] # 6,1 Device Serial Number
    (0018,1020) LO [syngo MR B20P ] # 14,1-n Software Version(s)
    (0018,1030) LO [Abd_MRAC_PET] # 12,1 Protocol Name
    (0018,1181) CS [NONE] # 4,1 Collimator Type
    (0018,1200) DA [20161101] # 8,1-n Date of Last Calibration
    (0018,1201) TM [083427.000000 ] # 14,1-n Time of Last Calibration
    (0018,1210) SH [XYZGAUSSIAN4.00 ] # 16,1-n Convolution Kernel
    (0018,1242) IS [2700000 ] # 8,1 Actual Frame Duration
    (0018,5100) CS [HFS ] # 4,1 Patient Position
    (0020,000d) UI [1.3.12.2.1107.5.2.38.51040.30000016110123080754600001906] # 56,1 Study Instance UID
    (0020,000e) UI [1.3.12.2.1107.5.2.38.51040.2016110110532300000555] # 50,1 Series Instance UID
    (0020,0010) SH [1 ] # 2,1 Study ID
    (0020,0011) IS [51] # 2,1 Series Number
    (0020,0013) IS [1 ] # 2,1 Instance Number
    (0020,0032) DS [-358.49165142605\-359.76148695776\133.46359705925 ] # 50,3 Image Position (Patient)
    (0020,0037) DS [1\0\0\0\1\0 ] # 12,6 Image Orientation (Patient)
    (0020,0052) UI [1.3.12.2.1107.5.2.38.51040.2.20161101094730687.0.0.0] # 52,1 Frame of Reference UID
    (0020,1002) IS [127 ] # 4,1 Images in Acquisition
    (0020,1040) LO (no value) # 0,1 Position Reference Indicator
    (0020,1041) DS [133.464 ] # 8,1 Slice Location
    (0020,4000) LT [Anonymous ] # 10,1 Image Comments
    (0028,0002) US 1 # 2,1 Samples per Pixel
    (0028,0004) CS [MONOCHROME2 ] # 12,1 Photometric Interpretation
    (0028,0010) US 344 # 2,1 Rows
    (0028,0011) US 344 # 2,1 Columns
    (0028,0030) DS [2.08626\2.08626 ] # 16,2 Pixel Spacing
    (0028,0051) CS [NORM\DTIM\MLAAFOV\ATTN\3SCAT\RELSC\DECY\FLEN\RANSM\XYSM\ZSM ] # 60,1-n Corrected Image
    (0028,0100) US 16 # 2,1 Bits Allocated
    (0028,0101) US 16 # 2,1 Bits Stored
    (0028,0102) US 15 # 2,1 High Bit
    (0028,0103) US 0 # 2,1 Pixel Representation
    (0028,0106) US 0 # 2,1 Smallest Image Pixel Value
    (0028,0107) US 5611 # 2,1 Largest Image Pixel Value
    (0028,1050) DS [15041 ] # 6,1-n Window Center
    (0028,1051) DS [30081 ] # 6,1-n Window Width
    (0028,1052) DS [0 ] # 2,1 Rescale Intercept
    (0028,1053) DS [5.3612170781579 ] # 16,1 Rescale Slope
    (0028,1054) LO [US] # 2,1 Rescale Type
    (0028,1055) LO [Algo1 ] # 6,1-n Window Center & Width Explanation
    (0029,0010) LO [SIEMENS CSA HEADER] # 18,1 Private Creator
    (0029,0011) LO [SIEMENS MEDCOM HEADER ] # 22,1 Private Creator
    (0029,0012) LO [SIEMENS MEDCOM HEADER2] # 22,1 Private Creator
    (0029,1008) CS [PET NUM 4 ] # 10,1 CSA Image Header Type
    (0029,1009) LO [20161101] # 8,1 CSA Image Header Version
    (0029,1010) OB 53\56\31\30\04\03\02\01\1c\00\00\00\4d\00\00\00\50\72\6f\74\6f\63\6f\6c\53\6c\69\63\65\4e\75\6d\62\65\72\00\11\11\11\00\12\12\12\00\13\13\13\00\14\14\14\00\15\15\15\00\16\16\16\00\17\17\17\00 # 2888,1 CSA Image Header Info
    (0029,1018) CS [PT] # 2,1 CSA Series Header Type
    (0029,1019) LO [20161101] # 8,1 CSA Series Header Version
    (0029,1020) OB 53\56\31\30\04\03\02\01\26\00\00\00\4d\00\00\00\55\73\65\64\50\61\74\69\65\6e\74\57\65\69\67\68\74\00\00\00\00\02\00\00\6c\01\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\43\73\61\53 # 124932,1 CSA Series Header Info
    (0029,1120) OB 4d\00\45\00\44\00\43\00\4f\00\4d\00\20\00\48\00\49\00\53\00\54\00\4f\00\52\00\59\00\20\00\56\00\31\00\2e\00\31\00\00\00\43\00\73\00\61\00\50\00\61\00\74\00\69\00\65\00\6e\00\74\00\00\00\30\00 # 272,1 MedCom History Information
    (0029,1260) LO [com ] # 4,1 Series Workflow Status
    (0032,1060) LO [XX] # 32,1 Requested Procedure Description
    (0040,0244) DA [20161101] # 8,1 Performed Procedure Step Start Date
    (0040,0245) TM [093452.906000 ] # 14,1 Performed Procedure Step Start Time
    (0040,0253) SH [MR20161101093452] # 16,1 Performed Procedure Step ID
    (0040,0254) LO [XX^XX] # 32,1 Performed Procedure Step Description
    (0054,0013) SQ (Sequence with defined length) # 32,1 Energy Window Range Sequence
    (fffe,e000) na (Item with defined length)
    (0054,0014) DS [430 ] # 4,1 Energy Window Lower Limit
    (0054,0015) DS [610 ] # 4,1 Energy Window Upper Limit
    (0054,0016) SQ (Sequence with defined length) # 178,1 Radiopharmaceutical Information Sequence
    (fffe,e000) na (Item with defined length)
    (0018,0031) LO [Fluorodeoxyglucose] # 18,1 Radiopharmaceutical
    (0018,1071) DS [0 ] # 2,1 Radiopharmaceutical Volume
    (0018,1072) TM [093013.000000 ] # 14,1 Radiopharmaceutical Start Time
    (0018,1074) DS [384800000 ] # 10,1 Radionuclide Total Dose
    (0018,1075) DS [6586.2] # 6,1 Radionuclide Half Life
    (0018,1076) DS [0.97] # 4,1 Radionuclide Positron Fraction
    (0054,0300) SQ (Sequence with defined length) # 56,1 Radionuclide Code Sequence
    (fffe,e000) na (Item with defined length)
    (0008,0100) SH [C-111A1 ] # 8,1 Code Value
    (0008,0102) SH [SRT ] # 4,1 Coding Scheme Designator
    (0008,0104) LO [^18^Fluorine] # 12,1 Code Meaning
    (0054,0081) US 127 # 2,1 Number of Slices
    (0054,0410) SQ (Sequence with defined length) # 0,1 Patient Orientation Code Sequence
    (0054,0414) SQ (Sequence with defined length) # 0,1 Patient Gantry Relationship Code Sequence
    (0054,1000) CS [WHOLE BODY\IMAGE] # 16,2 Series Type
    (0054,1001) CS [BQML] # 4,1 Units
    (0054,1002) CS [EMISSION] # 8,1 Counts Source
    (0054,1100) CS [DLYD] # 4,1 Randoms Correction Method
    (0054,1101) LO [measured] # 8,1 Attenuation Correction Method
    (0054,1102) CS [START ] # 6,1 Decay Correction
    (0054,1103) LO [OP-OSEM3i21s] # 12,1 Reconstruction Method
    (0054,1104) LO [DESCRIPTION ] # 12,1 Detector Lines of Response Used
    (0054,1105) LO [Model-based ] # 12,1 Scatter Correction Method
    (0054,1200) DS [60] # 2,1 Axial Acceptance
    (0054,1201) IS [5\6 ] # 4,2 Axial Mash
    (0054,1300) DS [1318054.1176629 ] # 16,1 Frame Reference Time
    (0054,1321) DS [1.1488] # 6,1 Decay Factor
    (0054,1322) DS [1 ] # 2,1 Dose Calibration Factor
    (0054,1323) DS [42.0091 ] # 8,1 Scatter Fraction Factor
    (0054,1330) US 1 # 2,1 Image Index
    (7fe0,0010) OW 00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00 # 236672,1 Pixel Data

test_get_elem_value UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 5136: character maps to <undefined>

$> apt-cache policy python-chardet 
python-chardet:
  Installed: 3.0.4-1
  Candidate: 3.0.4-1
  Version table:
 *** 3.0.4-1 600
        600 http://http.debian.net/debian sid/main amd64 Packages
        600 http://http.debian.net/debian sid/main i386 Packages
        100 /var/lib/dpkg/status
     2.3.0-2 100
        100 http://http.debian.net/debian stretch/main amd64 Packages
        100 http://http.debian.net/debian stretch/main i386 Packages

$> nosetests -s -v test/test_extract.py:TestMetaExtractor.test_get_elem_value
test_extract.TestMetaExtractor.test_get_elem_value ... ERROR
 
======================================================================
ERROR: test_extract.TestMetaExtractor.test_get_elem_value
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/yoh/deb/gits/pkg-exppsy/dcmstack/test/test_extract.py", line 91, in test_get_elem_value
    value = extractor._get_elem_value(elem)
  File "/home/yoh/deb/gits/pkg-exppsy/dcmstack/src/dcmstack/extract.py", line 398, in _get_elem_value
    value = self.conversions[elem.VR](value)
  File "/home/yoh/deb/gits/pkg-exppsy/dcmstack/src/dcmstack/extract.py", line 295, in get_text
    return byte_str.decode(match['encoding'])
  File "/usr/lib/python2.7/encodings/cp1254.py", line 15, in decode
    return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 5136: character maps to <undefined>
-------------------- >> begin captured logging << --------------------
chardet.charsetprober: DEBUG: SHIFT_JIS Japanese prober hit error at byte 240
chardet.charsetprober: DEBUG: EUC-JP Japanese prober hit error at byte 137
chardet.charsetprober: DEBUG: GB2312 Chinese prober hit error at byte 137
chardet.charsetprober: DEBUG: EUC-KR Korean prober hit error at byte 137
chardet.charsetprober: DEBUG: CP949 Korean prober hit error at byte 137
chardet.charsetprober: DEBUG: Big5 Chinese prober hit error at byte 137
chardet.charsetprober: DEBUG: EUC-TW Taiwan prober hit error at byte 137
chardet.charsetprober: DEBUG: windows-1251 Russian confidence = 0.0
chardet.charsetprober: DEBUG: KOI8-R Russian confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-5 Russian confidence = 0.01
chardet.charsetprober: DEBUG: MacCyrillic Russian confidence = 0.01
chardet.charsetprober: DEBUG: IBM866 Russian confidence = 0.01
chardet.charsetprober: DEBUG: IBM855 Russian confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-7 Greek confidence = 0.01
chardet.charsetprober: DEBUG: windows-1253 Greek confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-5 Bulgairan confidence = 0.01
chardet.charsetprober: DEBUG: windows-1251 Bulgarian confidence = 0.0
chardet.charsetprober: DEBUG: TIS-620 Thai confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-9 Turkish confidence = 0.355306617892
chardet.charsetprober: DEBUG: windows-1255 Hebrew confidence = 0.0
chardet.charsetprober: DEBUG: windows-1255 Hebrew confidence = 0.01
chardet.charsetprober: DEBUG: windows-1255 Hebrew confidence = 0.01
chardet.charsetprober: DEBUG: windows-1251 Russian confidence = 0.0
chardet.charsetprober: DEBUG: KOI8-R Russian confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-5 Russian confidence = 0.01
chardet.charsetprober: DEBUG: MacCyrillic Russian confidence = 0.01
chardet.charsetprober: DEBUG: IBM866 Russian confidence = 0.01
chardet.charsetprober: DEBUG: IBM855 Russian confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-7 Greek confidence = 0.01
chardet.charsetprober: DEBUG: windows-1253 Greek confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-5 Bulgairan confidence = 0.01
chardet.charsetprober: DEBUG: windows-1251 Bulgarian confidence = 0.0
chardet.charsetprober: DEBUG: TIS-620 Thai confidence = 0.01
chardet.charsetprober: DEBUG: ISO-8859-9 Turkish confidence = 0.355306617892
chardet.charsetprober: DEBUG: windows-1255 Hebrew confidence = 0.0
chardet.charsetprober: DEBUG: windows-1255 Hebrew confidence = 0.01
chardet.charsetprober: DEBUG: windows-1255 Hebrew confidence = 0.01
--------------------- >> end captured logging << ---------------------
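A defensive fix on the extraction side would be to fall back to a lossless decoding when the chardet guess (here cp1254 / ISO-8859-9) turns out to be wrong. A minimal sketch, not dcmstack's actual code — the helper name is made up:

```python
def decode_text(byte_str, encoding):
    """Decode DICOM text, falling back to latin-1 when the guessed
    encoding cannot handle the bytes (e.g. 0x81 in cp1254)."""
    try:
        return byte_str.decode(encoding)
    except (UnicodeDecodeError, LookupError):
        # latin-1 maps every possible byte, so this never raises;
        # undecodable bytes survive as their latin-1 interpretation
        return byte_str.decode('latin-1')

print(decode_text(b'abc\x81def', 'cp1254'))
```

This trades a possible mojibake character for a hard crash, which is usually the right call when extracting free-text metadata.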

incorrect grouping from same set of sequences

@moloney - we have a dataset that's collected on a siemens trio with vb17 that's giving https://github.com/nipy/heudiconv some trouble.

specifically, all we need is dcmstack's parse and group functionality and then we hand over the conversion to any converter in batches.

https://github.com/nipy/heudiconv/blob/master/bin/heudiconv#L67

however in our dataset 2 of the 28 participants are showing the following patterns for some sequences:

  1. slices = slices * timepoints (e.g. slices = 128 * 10):

     5042  somefile.IMA  15-DIFFUSION_HighRes_PA_B0  -  -  -  128  128  1280  1  9.72  84.0  DIFFUSION_HighRes_PA_B0  False

  2. timepoints = 2 * timepoints, either for structural or functional, so sequence-independent behavior (e.g. timepoints = 176 * 2 or 183 * 2):

     1766  somefile.IMA  3-MEMPRAGE_4e_p2_1mm_iso  -  -  -  256  256  352  1  2.53  2.1  MEMPRAGE_4e_p2_1mm_iso  False
     6232  somefile.IMA  22-localizer_ge_func_3.1x3.1x3.1_PACE  -  -  -  64  64  32  366  2.06  30.0  localizer_ge_func_3.1x3.1x3.1_PACE  False

and the 2 participants are not sequential.

any ideas as to what could be causing this?

and more generally we would like to simply have that functionality be an api call in dcmstack, without any nifti conversion. it should simply give us a list of series with grouped dicom files and their metadata.
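As a stopgap, `parse_and_stack` already does the parse/group step without converting anything — conversion only happens if you later call `to_nifti` on a stack. A sketch of grouping-only usage (`parse_and_stack` and `get_shape` are existing dcmstack API; the key layout matches the default `group_by`):

```python
def summarize_groups(paths):
    """Parse and group DICOM files into stacks without converting
    anything; nothing is written unless to_nifti is later called."""
    import dcmstack  # deferred import so the sketch stands alone
    groups = dcmstack.parse_and_stack(paths)
    for key, stack in groups.items():
        # default group_by key: (StudyInstanceUID, SeriesNumber,
        # ProtocolName, ImageOrientationPatient)
        print(key[1], key[2], stack.get_shape())
    return groups
```

Inspecting `get_shape()` per group this way should also show which of the two miscounting patterns above a given stack falls into.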

voxel_order bug?

I am using the trunk version, but it seems recent changes have made the voxel_order option unstable.
When using to_nifti_wrapper(voxel_order='RPI') on a DicomStack object, saving with to_filename('test.nii'), and running AFNI 3dinfo on the file, I get [-orient LAS], contrary to previous versions.
As there are many commits to the function dealing with orientation, a problem may have been introduced there, or maybe I did something wrong.
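One thing worth checking before blaming dcmstack: AFNI's orientation letters name the side each axis *starts* from, so AFNI `-orient LAS` (x: Left-to-Right, y: Anterior-to-Posterior, z: Superior-to-Inferior) corresponds to nibabel/dcmstack axis codes 'RPI' — the two reports may actually be consistent. You can verify what the saved affine encodes with nibabel's `aff2axcodes`, or with this small pure-NumPy reimplementation of it (ignoring oblique tie-breaking subtleties):

```python
import numpy as np

def axcodes(affine):
    """Closest-axis codes, e.g. ('R', 'P', 'I'), for a 3x3 or 4x4
    affine -- the same convention nibabel's aff2axcodes reports."""
    labels = (('L', 'R'), ('P', 'A'), ('I', 'S'))
    R = np.asarray(affine)[:3, :3]
    codes = []
    for col in R.T:                          # one column per voxel axis
        world = int(np.argmax(np.abs(col)))  # dominant world axis
        codes.append(labels[world][1] if col[world] > 0 else labels[world][0])
    return tuple(codes)

print(axcodes(np.eye(4)))  # ('R', 'A', 'S')
```

Running this on `nb.load('test.nii').affine` tells you what dcmstack actually wrote, independent of AFNI's reporting convention.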

Excessive resource demands

I am trying to convert Philips DICOMs from a ~1 hour scan. About 36k single-slice DICOMs in a single directory, several image series together. The total size of the tarball is ~850MB (~160MB gzipped). I convert via the following call:

% dcmstack -v -d --dest-dir . --file-ext '' study_20...

These DICOMs have no file name extensions, hence the option.

At this point the process is running for 40 min and consumes 18GB of RAM. However, no files have been created yet, hence I assume it will keep going.

The memory consumption is more than 20 times the input data. This seems excessive. Any idea what is happening?

Thanks!
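A likely contributor is that the CLI parses and holds metadata for every slice of every series before writing anything. One workaround is to group the files per series first, skipping the pixel data, and then convert one series at a time. A sketch using pydicom (hypothetical helper, not part of dcmstack; `stop_before_pixels` is real pydicom API):

```python
import os
from collections import defaultdict

def iter_series_batches(dicom_dir):
    """Yield (series_uid, file_list) one series at a time, so each
    series can be stacked and written before the next is loaded,
    instead of holding ~36k slices in memory at once."""
    import pydicom  # deferred import; only headers are read here
    by_series = defaultdict(list)
    for name in os.listdir(dicom_dir):
        path = os.path.join(dicom_dir, name)
        # stop_before_pixels avoids loading image data during grouping
        ds = pydicom.dcmread(path, stop_before_pixels=True, force=True)
        by_series[ds.SeriesInstanceUID].append(path)
    for uid, paths in sorted(by_series.items()):
        yield uid, paths
```

Each yielded batch can then be fed to dcmstack separately, bounding peak memory by the largest single series rather than the whole study.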

DICOM sort

Hi,

The automatic conversion tool is cool. I have a question about DICOM sorting. The documentation says: 'It is recommended that you sort your DICOM data into directories (at least per study, but preferably by series) before conversion.' Could you help me figure out how to sort the DICOMs by series or by study? Is there any software for this?

Thanks so much!
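Dedicated sorters exist, but a few lines of pydicom are enough to sort a flat directory into per-study/per-series folders before running dcmstack. A sketch (hypothetical helper; the directory layout is just one reasonable choice):

```python
import os
import shutil

def sort_dicoms(src_dir, dest_dir):
    """Copy DICOM files into dest_dir/<StudyInstanceUID>/<SeriesNumber>/."""
    import pydicom  # deferred import; only headers are needed
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        try:
            # stop_before_pixels keeps this fast for large studies
            ds = pydicom.dcmread(src, stop_before_pixels=True)
        except Exception:
            continue  # skip non-DICOM files
        out = os.path.join(dest_dir, str(ds.StudyInstanceUID),
                           '%03d' % int(ds.get('SeriesNumber', 0)))
        os.makedirs(out, exist_ok=True)
        shutil.copy(src, out)
```

After sorting, each series directory can be converted independently with `dcmstack <series_dir>`.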

niftiwrapper IO

I ran into an issue with specific MPRAGE DICOMs exported from Philips with the enhanced 3D multi-frame files split into classic 2D single-frame DICOMs.
Using dcmstack, I am able to parse them, create a NiftiWrapper, and save it, but loading the result with nibabel.load or dcmstack.NiftiWrapper.from_filename raises a JSON error:

 dcmstack.NiftiWrapper.from_filename('test.nii')
/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/dcmstack-0.7.dev-py2.6.egg/dcmstack/dcmmeta.pyc in from_filename(klass, path)
   1480             The path to the Nifti file to load.
   1481         '''
-> 1482         return klass(nb.load(path))
   1483 
   1484     @classmethod

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/nibabel/loadsave.pyc in load(filename)
     52         else:
     53             klass =  spm2.Spm2AnalyzeImage
---> 54     return klass.from_filename(filename)
     55 
     56 

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/nibabel/spatialimages.pyc in from_filename(klass, filename)
    409     def from_filename(klass, filename):
    410         file_map = klass.filespec_to_file_map(filename)
--> 411         return klass.from_file_map(file_map)
    412 
    413     @classmethod

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/nibabel/analyze.pyc in from_file_map(klass, file_map)
    868         hdr_fh, img_fh = klass._get_fileholders(file_map)
    869         hdrf = hdr_fh.get_prepare_fileobj(mode='rb')
--> 870         header = klass.header_class.from_fileobj(hdrf)
    871         if hdr_fh.fileobj is None: # was filename
    872             hdrf.close()

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/nibabel/nifti1.pyc in from_fileobj(klass, fileobj, endianness, check)
    580             extsize = hdr._structarr['vox_offset'] - fileobj.tell()
    581         byteswap = endian_codes['native'] != hdr.endianness
--> 582         hdr.extensions = klass.exts_klass.from_fileobj(fileobj, extsize, byteswap)
    583         return hdr
    584 

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/nibabel/nifti1.pyc in from_fileobj(klass, fileobj, size, byteswap)
    497             # a particular extension type
    498             try:
--> 499                 ext = extension_codes.handler[ecode](ecode, evalue)
    500             except KeyError:
    501                 # unknown extension type

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/nibabel/nifti1.pyc in __init__(self, code, content)
    261             # XXX or fail or at least complain?
    262             self._code = code
--> 263         self._content = self._unmangle(content)
    264 
    265     def _unmangle(self, value):

/home_local/bpinsard/.virtualenvs/processing_virtenv/lib/python2.6/site-packages/dcmstack-0.7.dev-py2.6.egg/dcmstack/dcmmeta.pyc in _unmangle(self, value)
    694         if sys.version_info >= (2, 7):
    695             kwargs['object_pairs_hook'] = OrderedDict
--> 696         return json.loads(value, **kwargs)
    697 
    698     def _mangle(self, value):

/usr/lib64/python2.6/json/__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, **kw)
    305             parse_int is None and parse_float is None and
    306             parse_constant is None and not kw):
--> 307         return _default_decoder.decode(s)
    308     if cls is None:
    309         cls = JSONDecoder

/usr/lib64/python2.6/json/decoder.pyc in decode(self, s, _w)
    317 
    318         """
--> 319         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    320         end = _w(s, end).end()
    321         if end != len(s):

/usr/lib64/python2.6/json/decoder.pyc in raw_decode(self, s, **kw)
    334         kw.setdefault('context', self)
    335         try:
--> 336             obj, end = self._scanner.iterscan(s, **kw).next()
    337         except StopIteration:
    338             raise ValueError("No JSON object could be decoded")

/usr/lib64/python2.6/json/scanner.pyc in iterscan(self, string, idx, context)
     53             action = actions[m.lastindex]
     54             if action is not None:
---> 55                 rval, next_pos = action(m, context)
     56                 if next_pos is not None and next_pos != matchend:
     57                     # "fast forward" the scanner

/usr/lib64/python2.6/json/decoder.pyc in JSONObject(match, context, _w)
    181         end = _w(s, end + 1).end()
    182         try:
--> 183             value, end = iterscan(s, idx=end, context=context).next()
    184         except StopIteration:
    185             raise ValueError(errmsg("Expecting object", s, end))

/usr/lib64/python2.6/json/scanner.pyc in iterscan(self, string, idx, context)
     53             action = actions[m.lastindex]
     54             if action is not None:
---> 55                 rval, next_pos = action(m, context)
     56                 if next_pos is not None and next_pos != matchend:
     57                     # "fast forward" the scanner

/usr/lib64/python2.6/json/decoder.pyc in JSONObject(match, context, _w)
    181         end = _w(s, end + 1).end()
    182         try:
--> 183             value, end = iterscan(s, idx=end, context=context).next()
    184         except StopIteration:
    185             raise ValueError(errmsg("Expecting object", s, end))

/usr/lib64/python2.6/json/scanner.pyc in iterscan(self, string, idx, context)
     53             action = actions[m.lastindex]
     54             if action is not None:
---> 55                 rval, next_pos = action(m, context)
     56                 if next_pos is not None and next_pos != matchend:
     57                     # "fast forward" the scanner

/usr/lib64/python2.6/json/decoder.pyc in JSONObject(match, context, _w)
    181         end = _w(s, end + 1).end()
    182         try:
--> 183             value, end = iterscan(s, idx=end, context=context).next()
    184         except StopIteration:
    185             raise ValueError(errmsg("Expecting object", s, end))

/usr/lib64/python2.6/json/scanner.pyc in iterscan(self, string, idx, context)
     53             action = actions[m.lastindex]
     54             if action is not None:
---> 55                 rval, next_pos = action(m, context)
     56                 if next_pos is not None and next_pos != matchend:
     57                     # "fast forward" the scanner

/usr/lib64/python2.6/json/decoder.pyc in JSONArray(match, context, _w)
    215     while True:
    216         try:
--> 217             value, end = iterscan(s, idx=end, context=context).next()
    218         except StopIteration:
    219             raise ValueError(errmsg("Expecting object", s, end))

/usr/lib64/python2.6/json/scanner.pyc in iterscan(self, string, idx, context)
     53             action = actions[m.lastindex]
     54             if action is not None:
---> 55                 rval, next_pos = action(m, context)
     56                 if next_pos is not None and next_pos != matchend:
     57                     # "fast forward" the scanner

/usr/lib64/python2.6/json/decoder.pyc in JSONObject(match, context, _w)
    183             value, end = iterscan(s, idx=end, context=context).next()
    184         except StopIteration:
--> 185             raise ValueError(errmsg("Expecting object", s, end))
    186         pairs[key] = value
    187         end = _w(s, end).end()

ValueError: Expecting object: line 274 column 46 (char 10553)

The NIfTI file otherwise loads fine when dcmstack is not on the library path, and it opens in any other software.
Any idea what could cause this?
Thanks.
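Importing dcmstack registers a JSON handler for its NIfTI header extension, which is why the same file loads fine when dcmstack is not on the path: without the handler, nibabel just carries the extension as raw bytes. To see what is actually in the extension (and what sits around char 10553 where the JSON parser gives up), you can dump it with plain nibabel in a fresh interpreter that has not imported dcmstack. A sketch (`get_code`/`get_content` are real nibabel extension API):

```python
def dump_meta_extension(path, preview=200):
    """Print each NIfTI-1 header extension's code and a preview of its
    raw content, so malformed embedded JSON can be inspected."""
    import nibabel as nb  # run without dcmstack imported anywhere
    img = nb.load(path)
    for ext in img.header.extensions:
        content = ext.get_content()
        print(ext.get_code(), repr(content)[:preview])
```

If a metadata value contains bytes the `json` module cannot parse, it should be visible in that dump and traceable back to the offending DICOM element.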

dcmstack fails to convert certain sets of DICOM files

I'm trying to convert this dataset - https://dl.dropbox.com/u/412959/dicoms.tar.gz Unfortunately I get an error:

---------------------------------------------------------------------------
InvalidDicomError                         Traceback (most recent call last)
/Users/filo/<ipython-input-10-9c30552a7407> in <module>()
      2 for src_path in glob.glob("*.dcm"):
      3     print src_path
----> 4     src_dcm = dicom.read_file(src_path)
      5     stack.add_dcm(src_dcm)

/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/dicom/filereader.pyc in read_file(fp, defer_size, stop_before_pixels, force)
    623     try:
    624         dataset = read_partial(fp, stop_when, defer_size=defer_size,
--> 625                                             force=force)
    626     finally:
    627         if not caller_owns_file:

/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/dicom/filereader.pyc in read_partial(fileobj, stop_when, defer_size, force)
    529     """
    530     # Read preamble -- raise an exception if missing and force=False

--> 531     preamble = read_preamble(fileobj, force)
    532     file_meta_dataset = Dataset()
    533     # Assume a transfer syntax, correct it as necessary


/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/dicom/filereader.pyc in read_preamble(fp, force)
    502             fp.seek(0)
    503         else:
--> 504             raise InvalidDicomError("File is missing 'DICM' marker. "
    505                                     "Use force=True to force reading")
    506     else:

InvalidDicomError: File is missing 'DICM' marker. Use force=True to force reading

But when I try to force reading I get another error:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/Users/filo/<ipython-input-11-11b6ef0c3691> in <module>()
      3     print src_path
      4     src_dcm = dicom.read_file(src_path, force=True)
----> 5     stack.add_dcm(src_dcm)

/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/dcmstack-0.6.1-py2.7.egg/dcmstack/dcmstack.pyc in add_dcm(self, dcm, meta)
    525         nii_wrp = None
    526         if not is_dummy:
--> 527             nii_wrp = NiftiWrapper.from_dicom_wrapper(dw, meta)
    528             if self._ref_input is None:
    529                 #We don't have a reference input yet, use this one


/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/dcmstack-0.6.1-py2.7.egg/dcmstack/dcmmeta.pyc in from_dicom_wrapper(klass, dcm_wrp, meta_dict)
   1487 
   1488         #The Nifti patient space flips the x and y directions

-> 1489         affine = np.dot(np.diag([-1., -1., 1., 1.]), dcm_wrp.get_affine())
   1490 
   1491         #Make 2D data 3D


/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/nibabel-1.3.0-py2.7.egg/nibabel/nicom/dicomwrappers.pyc in get_affine(self)
    282         # direction cosine for changes in row index, column 1 is

    283         # direction cosine for changes in column index

--> 284         orient = self.rotation_matrix
    285         # therefore, these voxel sizes are in the right order (row,

    286         # column, slice)


/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/nibabel-1.3.0-py2.7.egg/nibabel/onetime.pyc in __get__(self, obj, type)
     41            return self.getter
     42 
---> 43        val = self.getter(obj)
     44        #print "** setattr_on_read - loading '%s'" % self.name  # dbg

     45        setattr(obj, self.name, val)

/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/nibabel-1.3.0-py2.7.egg/nibabel/nicom/dicomwrappers.pyc in rotation_matrix(self)
    165         assert np.allclose(np.eye(3),
    166                            np.dot(R, R.T),
--> 167                            atol=1e-6)
    168         return R
    169 

AssertionError: 

At the same time dcm2nii is able to convert this dataset with the following result https://dl.dropbox.com/u/412959/20101222_131954BTfMRIcontrolB4s004a1001.nii.gz I hope this is something minor since I love using dcmstack. Thanks in advance for any help!
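The second traceback is nibabel asserting that the direction cosines in ImageOrientationPatient form an orthonormal rotation to within 1e-6; some exporters write cosines truncated just enough to fail that check, while dcm2nii is more forgiving. A small diagnostic to measure how far off a given IOP is — hypothetical helper, not dcmstack code:

```python
import numpy as np

def iop_deviation(iop):
    """Max deviation of R.R^T from identity for the rotation built
    from ImageOrientationPatient (3 row cosines, 3 column cosines)."""
    row, col = np.asarray(iop, float).reshape(2, 3)
    # third axis is the slice normal, as nibabel constructs it
    R = np.stack([row, col, np.cross(row, col)], axis=1)
    return float(np.abs(R @ R.T - np.eye(3)).max())

print(iop_deviation([1, 0, 0, 0, 1, 0]))  # 0.0 for a perfectly axial slice
```

Running this on the (0020,0037) values from one of the failing files would show whether the data merely needs a looser tolerance (or re-normalized cosines) rather than being truly broken.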

parse_and_stack output (dcmstack object)

@moloney a while ago you recommended the parse_and_stack function for automatically stacking DICOMs with the same echo time together. I am trying to use this function, but I have no idea what to do with its output.

This:

from os import listdir
from dcmstack import parse_and_stack
import numpy as np

mydir = "/home/chymera/data/dc.rs/export_ME/dicom/4457/1/EPI/"

print(mydir)
filelist = listdir(mydir)
myfiles = [mydir+myfile for myfile in filelist]
results = parse_and_stack(myfiles)
print(results)
print(np.shape(results))
print(type(results))

prints:

/home/chymera/data/dc.rs/export_ME/dicom/4457/1/EPI/
{(u'2.16.756.5.5.100.9223372038926637660.20904.1423396754.1', 50001, u'T2S_EP_Feb2015_multi', (-0.9999965999, -0.002201215161, -0.001398168101, -0.002201267133, 0.9999975766, 3.563401135e-05)): <dcmstack.dcmstack.DicomStack object at 0x7fb3df4cd450>}
()
<type 'dict'>
[Finished in 561.182s]

what can I do with that dict? It seems to me, the function creates no files. What other function do I need to pass the dict to, to get the files I want?
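Each value in that dict is a DicomStack; `parse_and_stack` itself never writes files. To get NIfTIs, call `to_nifti_wrapper()` (or `to_nifti()`) on each stack and save it — those are existing dcmstack API, while the naming scheme below is just an example:

```python
import os

def write_stacks(results, out_dir):
    """Save each DicomStack from a parse_and_stack result as a NIfTI
    with the DICOM metadata embedded in the header extension."""
    for key, stack in sorted(results.items()):
        # default group_by key layout: (StudyUID, SeriesNumber,
        # ProtocolName, ImageOrientationPatient)
        series_num, protocol = key[1], key[2]
        nii_wrp = stack.to_nifti_wrapper()
        fname = '%05d_%s.nii.gz' % (series_num, protocol)
        nii_wrp.to_filename(os.path.join(out_dir, fname))
```

For the dict printed above, `write_stacks(results, '.')` would write one file for the single T2S_EP_Feb2015_multi stack it contains.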

Bug to convert philips DTI dcm files

Hello:
Thanks for your work, but I may have a very stupid question: does your converter work for all machines or not? I tested with Siemens, GE, and Philips, but I have a problem converting Philips data:

I got errors like this; since I am not familiar with DICOM, I cannot figure out the bug from the Python source code:

Traceback (most recent call last):
  File "/Users/junhao.wen/Hao/Code/Python/Economy/test_dcmstack.py", line 13, in <module>
    stack_data = my_stack.get_data()
  File "build/bdist.macosx-10.6-x86_64/egg/dcmstack/dcmstack.py", line 763, in get_data
  File "build/bdist.macosx-10.6-x86_64/egg/dcmstack/dcmstack.py", line 726, in get_shape
dcmstack.dcmstack.InvalidStackError: The DICOM stack is not valid: Unable to guess key for sorting the fourth dimension

PS: I also tried the command line; it gave the same errors.
Really looking forward to the reply :)
Really look forward to the reply:)

Good day

Hao
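"Unable to guess key for sorting the fourth dimension" means dcmstack found multiple slices at the same position but none of its default meta keys varied in a way it could use for ordering. With Philips DTI, supplying the ordering key explicitly often helps; `time_order` is a real DicomStack parameter, but which tag actually varies across volumes (a b-value, a temporal index, or a private tag) depends on the export, so the default key below is an assumption:

```python
def stack_with_order(paths, key='DiffusionBValue'):
    """Stack DICOM files, telling dcmstack which meta key sorts the
    fourth dimension instead of letting it guess."""
    import pydicom
    import dcmstack
    stack = dcmstack.DicomStack(time_order=key)
    for path in paths:
        stack.add_dcm(pydicom.dcmread(path, force=True))
    return stack.to_nifti()
```

Dumping one file's header (e.g. with `pydicom.dcmread(path)` and printing the dataset) is the quickest way to find which tag distinguishes the DTI volumes.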
