
sendit's Introduction


pydicom

pydicom is a pure Python package for working with DICOM files. It lets you read, modify and write DICOM data in an easy "pythonic" way. As a pure Python package, pydicom can run anywhere Python runs without any other requirements, although if you're working with Pixel Data then we recommend you also install NumPy.

Note that pydicom is a general-purpose DICOM framework concerned with reading and writing DICOM datasets. In order to keep the project manageable, it does not handle the specifics of individual SOP classes or other aspects of DICOM. Other libraries both inside and outside the pydicom organization are based on pydicom and provide support for other aspects of DICOM, and for more specific applications.

Examples are pynetdicom, which is a Python library for DICOM networking, and deid, which supports the anonymization of DICOM files.

Installation

Using pip:

pip install pydicom

Using conda:

conda install -c conda-forge pydicom

For more information, including installation instructions for the development version, see the installation guide.

Documentation

The pydicom user guide, tutorials, examples and API reference documentation are available for both the current release and the development version on GitHub Pages.

Pixel Data

Compressed and uncompressed Pixel Data is always available to be read, changed and written as bytes:

>>> from pydicom import dcmread
>>> from pydicom.data import get_testdata_file
>>> path = get_testdata_file("CT_small.dcm")
>>> ds = dcmread(path)
>>> type(ds.PixelData)
<class 'bytes'>
>>> len(ds.PixelData)
32768
>>> ds.PixelData[:2]
b'\xaf\x00'

If NumPy is installed, Pixel Data can be converted to an ndarray using the Dataset.pixel_array property:

>>> arr = ds.pixel_array
>>> arr.shape
(128, 128)
>>> arr
array([[175, 180, 166, ..., 203, 207, 216],
       [186, 183, 157, ..., 181, 190, 239],
       [184, 180, 171, ..., 152, 164, 235],
       ...,
       [906, 910, 923, ..., 922, 929, 927],
       [914, 954, 938, ..., 942, 925, 905],
       [959, 955, 916, ..., 911, 904, 909]], dtype=int16)

Decompressing Pixel Data

JPEG, JPEG-LS and JPEG 2000

Converting JPEG, JPEG-LS or JPEG 2000 compressed Pixel Data to an ndarray requires installing one or more additional Python libraries. For information on which libraries are required, see the pixel data handler documentation.
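
As a minimal sketch (assuming a suitable decoding plugin such as pylibjpeg is installed, and that `path` points to a JPEG 2000 compressed dataset), the conversion itself looks the same as for uncompressed data:

>>> from pydicom import dcmread
>>> ds = dcmread(path)    # `path` is assumed to point at a JPEG 2000 compressed file
>>> arr = ds.pixel_array  # decoded transparently by whichever supported library is installed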

RLE

Decompressing RLE Pixel Data only requires NumPy; however, it can be quite slow. You may want to consider installing one or more additional Python libraries to speed up the process.

Compressing Pixel Data

Information on compressing Pixel Data using one of the formats below can be found in the corresponding encoding guides. These guides cover the specific requirements for each encoding method, and we recommend you become familiar with them before performing image compression.

JPEG-LS, JPEG 2000

Compressing image data from an ndarray or bytes object to JPEG-LS or JPEG 2000 requires installing one or more additional Python libraries; see the encoding guides for the specific packages.

RLE

Compressing using RLE requires no additional packages but can be quite slow. It can be sped up by installing pylibjpeg with the pylibjpeg-rle plugin, or gdcm.
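
As a rough sketch of the compression workflow, recent pydicom versions (2.2+) provide Dataset.compress(); the output filename below is just an example:

>>> from pydicom import dcmread
>>> from pydicom.data import get_testdata_file
>>> from pydicom.uid import RLELossless
>>> ds = dcmread(get_testdata_file("CT_small.dcm"))
>>> ds.compress(RLELossless)  # re-encode the Pixel Data in-place using RLE Lossless
>>> ds.save_as("CT_small_rle.dcm")  # output filename is just an example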

Examples

More examples are available in the documentation.

Change a patient's ID

from pydicom import dcmread

ds = dcmread("/path/to/file.dcm")
# Edit the (0010,0020) 'Patient ID' element
ds.PatientID = "12345678"
ds.save_as("/path/to/file_updated.dcm")

Display the Pixel Data

With NumPy and matplotlib

import matplotlib.pyplot as plt
from pydicom import dcmread
from pydicom.data import get_testdata_file

# The path to a pydicom test dataset
path = get_testdata_file("CT_small.dcm")
ds = dcmread(path)
# `arr` is a numpy.ndarray
arr = ds.pixel_array

plt.imshow(arr, cmap="gray")
plt.show()

Contributing

We are all volunteers working on pydicom in our free time. As our resources are limited, we very much value your contributions, be it bug fixes, new core features, or documentation improvements. For more information, please read our contribution guide.

If you have examples or extensions of pydicom that don't belong with the core software, but that you deem useful to others, you can add them to our contribution repository: contrib-pydicom.

sendit's People

Contributors

charlesgueunet, vsoch


sendit's Issues

Updates for sendit development version 2.0

Sendit Base

We would want to be able to quickly deploy the main application Dockerfile, or entirely different tools that use the same Google APIs, off of this base. The Dockerfile for sendit can then simply update the (regularly changing) libraries:

FROM pydicom/sendit-base

# update deid
WORKDIR /opt
RUN git clone -b development https://github.com/pydicom/deid
WORKDIR /opt/deid
RUN python setup.py install

# som
WORKDIR /opt
RUN git clone https://github.com/vsoch/som
WORKDIR /opt/som
RUN python setup.py install

WORKDIR /code
ADD . /code/

EXPOSE 3031
CMD /code/run_uwsgi.sh

No targuzz

  • Images should be represented at the level of individual DICOM files
    This makes a lot of sense to me in terms of metadata: we want to represent metadata about images, not about zipped-up archives that need to be unzipped first. We can also very easily view a DICOM file in a browser from a URL, and this isn't the case with .tar.gz (unless it's another format like NIfTI).

User Friendly Config File

If we want to some day be able to deploy a sendit instance for a researcher, the configuration needs to be dead simple. The harder part is generating the deid recipe, but for the rest of it, it should come down to reading a file that gets integrated into their custom build and then drives the application. It might also make sense to represent the config in the database as a model; that way one instance can have several (with the input folders for each defined when created) and changes can be made without stopping/restarting the application.
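
A purely hypothetical sketch of the config-as-a-model idea, assuming the application keeps Django-style models (every name and field below is invented for illustration):

from django.db import models

class PipelineConfig(models.Model):
    """Hypothetical per-instance configuration, editable without restarting the app."""
    name = models.CharField(max_length=64, unique=True)
    input_folder = models.CharField(max_length=255)  # defined when the config is created
    deid_recipe = models.TextField(blank=True)       # the harder part: the deid recipe
    active = models.BooleanField(default=True)

    def __str__(self):
        return self.name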

Som BigQuery Client

  • Implement a BigQuery client in som-tools and use it for metadata
    We would want to use BigQuery instead of Datastore. This is ready to go and needs testing (a rough sketch of the idea follows below).
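
A minimal sketch of streaming metadata rows with the google-cloud-bigquery client; the project, table, and row contents below are made up:

from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.sendit.image_metadata"  # hypothetical project.dataset.table

rows = [
    {"image_uid": "example-uid-0001", "modality": "CT", "batch": "batch-0001"},
]

# insert_rows_json streams the rows and returns a list of per-row errors
errors = client.insert_rows_json(table_id, rows)
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")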

Testing

I want to do the following tests to generally settle on a "move images" and "move metadata" strategy. It comes down to testing batched uploads (in sync) and batched uploads (images separate from metadata) vs. rsync (riskier but a lot faster according to others).

  • Test speed with bigquery + metadata + storage
  • Test speed with caching metadata + storage
  • If timing is still slow, investigate rsync

Changes for Dasher

  • changes to dasher endpoint (session?)
    I'll leave this to Susan to ping me when we absolutely need changes.

Note: this is still a Stanford-hosted server, with no PHI in the cloud.

Getting sporadic "Received unexpected C-FIND service message" when issuing multiple C-FINDs in a row

I'm new to pydicom so more likely than not I'm doing something wrong. I have a script that will iterate over a list of series instance UIDs and issue a SERIES-level C-FIND against a DICOM SCP. Every so often (but quite often) I get an error on the screen stating:

WARNING [2023-10-26 18:24:00,572]: Received unexpected C-FIND service message

I can't pinpoint from which series it's coming, and I have tried a few that should be triggering this but when I issue just one C-FIND I don't get that warning.

This is the whole code, it just runs over and over a list of ser uids:

    # Imports assumed at module level (DicomResult is a class defined elsewhere in this code):
    #   import logging
    #   from pydicom.dataset import Dataset
    #   from pynetdicom import AE
    #   from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

    def cfind(self, series_uid):
        # debug_logger()
        ae = AE()
        ae.ae_title = self.calling_ae
        ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

        # Associate with the peer AE at IP 127.0.0.1 and port 11112
        logging.getLogger().setLevel(logging.FATAL)
        assoc = ae.associate(self.ip, self.port, ae_title=self.called_ae)
        if assoc.is_established:
            # Send the C-FIND request

            # Create our Identifier (query) dataset
            ds = Dataset()
            ds.add_new(0x0020000E, 'UI', series_uid)  # Series Instance UID
            ds.add_new(0x0020000D, 'UI', '')  # Study Instance UID
            ds.QueryRetrieveLevel = 'SERIES'

            ds.add_new(0x00100010, 'PN', '')  # Patient Name
            ds.add_new(0x00100020, 'LO', '')  # MRN
            ds.add_new(0x00080060, 'CS', '')  # Modality
            ds.add_new(0x00080050, 'SH', '')  # Acc#
            ds.add_new(0x00081030, 'LO', '')  # Study description
            ds.add_new(0x0008103E, 'LO', '')  # Series description

            responses = assoc.send_c_find(ds, StudyRootQueryRetrieveInformationModelFind)
            dicom_result = None
            for (status, identifier) in responses:
                if status and status.Status in [0xff00, 0xff01]:
                    try:
                        logging.info("MRN:\t\t\t" + identifier.PatientID)
                        logging.info("PatientName:\t" + str(identifier.PatientName))
                        logging.info("Study desc:\t\t" + identifier.StudyDescription)
                        logging.info("Series desc:\t\t" + identifier.SeriesDescription)
                        logging.info("Acc#:\t\t\t" + identifier.AccessionNumber)
                        logging.info("Study Ins UID:\t\t\t" + identifier.StudyInstanceUID)
                        logging.info("Series Ins UID:\t\t\t" + identifier.SeriesInstanceUID)
                        dicom_result = DicomResult(MRN=identifier.PatientID,
                                                   PatientName=identifier.PatientName,
                                                   SeriesDescription=identifier.SeriesDescription,
                                                   StudyDescription=identifier.StudyDescription,
                                                   AccessionNumber=identifier.AccessionNumber,
                                                   SeriesInstanceUID=identifier.SeriesInstanceUID,
                                                   StudyInstanceUID=identifier.StudyInstanceUID)
                        break
                    except AttributeError as exc:  # don't reuse `ae`, which already names the AE instance
                        logging.error('C-FIND query status: 0x{0:04X}'.format(status.Status))
                        logging.error(f"Got error when trying to extract elements from C-FIND response: {exc} - SUID: {series_uid}")
                elif status and status.Status in [0x0000]:
                    # Final response with no particular data. PHS PACS responds with this at the end of a C-FIND.
                    break
                else:
                    # `status` may be empty here (timeout/abort), so don't try to format status.Status
                    raise ValueError('Connection timed out, was aborted or received an invalid response')

            # Release the association
            assoc.release()
            logging.getLogger().setLevel(logging.INFO)
            return dicom_result
        else:
            logging.getLogger().setLevel(logging.INFO)
            raise ValueError(f'Association rejected, aborted or never connected - parameters {self.__str__()}')

I don't see any errors or missing data, just those warnings sent to stdout/stderr (not sure which - I didn't check).

Any ideas?

reorganize models of Study/Session into a general Batch

While the images have attributes for Study and Session, it doesn't make sense to model them from the application's standpoint. The model should be redone to have a single Batch, representing a folder of images, each of which is still represented as an Image in the database. The workers would then pass around batch ids instead of lists of image ids.
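
A rough sketch of what that reorganization could look like, again assuming Django-style models (names are illustrative, not the actual sendit models):

from django.db import models

class Batch(models.Model):
    """A folder of images processed together; workers pass around batch ids."""
    uid = models.CharField(max_length=64, unique=True)
    status = models.CharField(max_length=32, default="NEW")
    created = models.DateTimeField(auto_now_add=True)

class Image(models.Model):
    """A single dicom file, still represented individually in the database."""
    batch = models.ForeignKey(Batch, related_name="images", on_delete=models.CASCADE)
    uid = models.CharField(max_length=64, unique=True)
    name = models.CharField(max_length=255)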

[enhancement] think of possible config file for user to define tasks, dicom headers

In the future we would want the user to have more power to control tasks and processing, and (given that the action to get the files lives outside the application) we should read this from some kind of config file, otherwise fall back to defaults. A better idea might be to have the user register the C-MOVE in the application and specify these things when setting that up.
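
A minimal sketch of the read-a-config-file-or-fall-back-to-defaults idea; the file name, keys, and default values here are all assumptions:

import json
import os

# hypothetical defaults used when no config file is provided
DEFAULTS = {
    "tasks": ["get_identifiers", "replace_identifiers", "upload_storage"],
    "dicom_header_whitelist": ["Modality", "StudyDate"],
}

def load_config(path="sendit.config.json"):
    """Read the user's config file if present, otherwise fall back to the defaults."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as handle:
            config.update(json.load(handle))
    return config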
