
multiecho's Introduction

Multi-echo combinations


MRI data acquisitions can involve multiple volumes acquired at different echo times, whereas subsequent processing pipelines typically assume that the data were acquired at a single echo time. This repository provides a command-line tool to combine the multiple echoes of a multi-echo (BOLD f)MRI acquisition. It currently provides three different echo-averaging algorithms:

algorithm   description
average     Echoes are weighted equally
PAID        Echoes are weighted by their CNR, i.e. by their TE*tSNR contributions (BOLD fMRI data only)
TE          Echoes are weighted by their TEs
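
For illustration, here is a minimal numpy sketch of the three weighting schemes. This is not the package's actual implementation; the function name, array layout and variable names are assumptions.

import numpy as np

# Assumed layout: echoes is a list of 4D arrays (x, y, z, time), one per echo,
# and TEs is the list of corresponding echo times.
def combine_echoes(echoes, TEs, algorithm='TE'):
    data = np.stack(echoes, axis=-1)                  # (x, y, z, time, echo)
    if algorithm == 'average':
        weights = np.ones(len(TEs))                   # equal weighting
    elif algorithm == 'TE':
        weights = np.array(TEs, dtype=float)          # weight by echo time
    elif algorithm == 'PAID':
        tsnr = data.mean(axis=3) / data.std(axis=3)   # per-voxel tSNR, per echo
        weights = tsnr * np.array(TEs)                # CNR = TE * tSNR
        weights = weights[..., np.newaxis, :]         # broadcast over time
    else:
        raise ValueError(f'Unknown algorithm: {algorithm}')
    # Weighted average over the echo dimension
    return np.sum(data * weights, axis=-1) / np.sum(weights, axis=-1)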

For more information on multiecho acquisition and combination schemes, please refer to (for example):

  • Poser et al. (2006). BOLD Contrast Sensitivity Enhancement and Artifact Reduction with Multiecho EPI: Parallel-Acquired Inhomogeneity-Desensitized fMRI. Magn. Reson. Med. 55:6, pp. 1227–35.
  • Posse, Stefan (2012). Multi-Echo Acquisition. NeuroImage 62:2, pp. 665–671.

Multiecho has been developed at the Donders Institute of the Radboud University.

Installation

To install, simply run:

pip install multiecho

This will give you the latest stable release of the software. To get the very latest (possibly unreleased) version, you can install the package directly from the GitHub source code repository:

pip install git+https://github.com/Donders-Institute/multiecho

Alternatively, clone this repository and run the following from the root folder of the repository:

pip install .

The tool supports Python 3.6+.

Usage

Once installed, a command line tool called mecombine should be available in your PATH. Detailed usage information can be found by running mecombine -h:

usage: mecombine [-h] [-o OUTPUTNAME] [-a {PAID,TE,average}]
                      [-w [WEIGHTS [WEIGHTS ...]]] [-s] [-v VOLUMES]
                      pattern

Combine multi-echo echoes.

Tools to combine multiple echoes from an fMRI acquisition.
It expects input files saved as NIfTIs, preferably organised
according to the BIDS standard.

Currently three different combination algorithms are supported, implementing
the following weighting schemes:

1. PAID => TE * SNR
2. TE => TE
3. Simple Average => 1

positional arguments:
  pattern               Globlike search pattern with path to select the echo
                        images that need to be combined. Because of the
                        search, be sure to check that not too many files are
                        being read

optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUTNAME, --outputname OUTPUTNAME
                        File output name. If not a fullpath name, then the
                        output will be stored in the same folder as the input.
                        If empty, the output filename will be the filename of
                        the first echo appended with a '_combined' suffix
                        (default: )
  -a {PAID,TE,average}, --algorithm {PAID,TE,average}
                        Combination algorithm. Default: TE (default: TE)
  -w [WEIGHTS [WEIGHTS ...]], --weights [WEIGHTS [WEIGHTS ...]]
                        Weights (e.g. = echo times) for all echoes (default:
                        None)
  -s, --saveweights     If passed and algorithm is PAID, save weights
                        (default: False)
  -v VOLUMES, --volumes VOLUMES
                        Number of volumes that is used to compute the weights
                        if algorithm is PAID (default: 100)

examples:
  mecombine '/project/number/bids/sub-001/func/*_task-motor_*echo-*.nii.gz'
  mecombine '/project/number/bids/sub-001/func/*_task-rest_*echo-*.nii.gz' -a PAID
  mecombine '/project/number/bids/sub-001/func/*_acq-MBME_*run-01*.nii.gz' -w 11 22 33 -o sub-001_task-stroop_acq-mecombined_run-01_bold.nii.gz
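
For batch processing, mecombine can also be driven from a short Python script. The sketch below uses only the documented CLI options; the BIDS root, subject folders and task label are hypothetical.

import subprocess
from pathlib import Path

bidsdir = Path('/project/number/bids')    # hypothetical BIDS root
for subdir in sorted(bidsdir.glob('sub-*')):
    pattern = str(subdir / 'func' / '*_task-motor_*echo-*.nii.gz')
    # No shell quoting needed: subprocess passes the pattern as a single
    # argument and mecombine performs the glob expansion itself
    subprocess.run(['mecombine', pattern, '-a', 'PAID', '-s'], check=True)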

Caveats

Currently, the echo combination is resource-hungry because all datasets are loaded into memory at once. We could instead iterate through the volumes and keep only the final combined series in memory at any given time.
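
One possible direction (not currently implemented) is to use nibabel's proxy images and read one volume at a time. A rough sketch, with illustrative names and scalar per-echo weights:

import nibabel as nib
import numpy as np

def combine_volumewise(echo_files, weights):
    """Combine echoes volume by volume to limit memory use (sketch)."""
    imgs = [nib.load(f) for f in echo_files]    # proxy images, data not loaded yet
    n_vols = imgs[0].shape[3]
    combined = np.empty(imgs[0].shape, dtype=np.float32)
    for t in range(n_vols):
        # Read only volume t of each echo from disk
        vols = np.stack([np.asanyarray(img.dataobj[..., t]) for img in imgs], axis=-1)
        combined[..., t] = vols @ weights / np.sum(weights)
    return nib.Nifti1Image(combined, imgs[0].affine, imgs[0].header)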

You may receive a runtime warning (invalid value encountered in true_divide) when combining echoes with PAID. If your datasets have voxels with zeros, e.g., if they were masked, a division by 0 will lead to infinite weights. You may safely ignore the warning, but do check your data after the combination.
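
To inspect the result after a PAID combination, one could, for example, count the non-finite voxels in the output (the filename here is illustrative):

import nibabel as nib
import numpy as np

data = nib.load('sub-001_task-rest_echo-1_bold_combined.nii.gz').get_fdata()
bad  = ~np.isfinite(data)
print(f'{bad.sum()} non-finite voxels out of {data.size}')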

By default, PAID will compute the weights based on the last 100 volumes of the acquisition. Whether this is optimal is open to discussion. If you are testing the combination on a small subset of volumes, say 5 or so, the weights won't be stable and your image may look noisy.

multiecho's People

Contributors

dangom, jorryttichelaar, marcelzwiers, musicinmybrain


multiecho's Issues

Add options for smoothing

When using the PAID weighting scheme, the echo weights should be smoothed before echo combination.
Explain why and add a CLI parameter that accepts the smoothing kernel.
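
A smoothing step could look roughly like the sketch below; the kernel width, the function name and the exact place in the pipeline are assumptions, not part of the tool.

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_weights(weight_maps, sigma_vox=2.0):
    """Smooth per-echo PAID weight maps before combination (sketch).

    weight_maps: array of shape (x, y, z, n_echoes); sigma_vox is the
    Gaussian kernel width in voxels (an assumed default, not the tool's).
    """
    return np.stack([gaussian_filter(weight_maps[..., e], sigma=sigma_vox)
                     for e in range(weight_maps.shape[-1])], axis=-1)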

Implement a logging module

In order to keep the provenance of the data traceable, it would be good to log all image operations (e.g. compatible with the BIDScoin logger)
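
A minimal sketch of such a setup with the standard logging module; the exact format and handlers would need to be matched to the BIDScoin logger.

import logging

def setup_logging(logfile):
    """Attach console and file handlers so every image operation is logged."""
    logger = logging.getLogger('multiecho')
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    for handler in (logging.StreamHandler(), logging.FileHandler(logfile)):
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    return logger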

Make the interface more like that of a BIDS app

We could use pybids instead of a glob search pattern and use the standard input arguments of BIDS apps. Here is an example of such a BIDS app interface:

This App has the following command line arguments:

	usage: run.py [-h]
	              [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
	              bids_dir output_dir {participant,group}

	Example BIDS App entry point script.

	positional arguments:
	  bids_dir              The directory with the input dataset formatted
	                        according to the BIDS standard.
	  output_dir            The directory where the output files should be stored.
	                        If you are running a group level analysis, this folder
	                        should be prepopulated with the results of
	                        the participant level analysis.
	  {participant,group}   Level of the analysis that will be performed. Multiple
	                        participant level analyses can be run independently
	                        (in parallel).

	optional arguments:
	  -h, --help            show this help message and exit
	  --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
	                        The label(s) of the participant(s) that should be
	                        analyzed. The label corresponds to
	                        sub-<participant_label> from the BIDS spec (so it does
	                        not include "sub-"). If this parameter is not provided
	                        all subjects will be analyzed. Multiple participants
	                        can be specified with a space separated list.
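
With pybids, selecting the echo images could look like the following sketch; the entity values and the BIDS root are illustrative.

from bids import BIDSLayout

layout = BIDSLayout('/project/number/bids')    # BIDS root (illustrative)
# All echoes of one task for one subject
echoes = layout.get(subject='001', task='motor', suffix='bold',
                    extension='.nii.gz', return_type='filename')
# 'echo' is a valid BIDS entity, so individual echoes can also be selected
first_echo = layout.get(subject='001', task='motor', echo=1, suffix='bold',
                        extension='.nii.gz', return_type='filename')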

Improve code formatting

Perhaps use yapf (with shared settings), black, or some other code formatter.

The sole test does not seem to work

python3.11 -m venv _e
. _e/bin/activate
pip install -e .
pip install pytest
python -m pytest
============================= test session starts ==============================
platform linux -- Python 3.11.3, pytest-7.4.0, pluggy-1.2.0
rootdir: /home/ben/src/multiecho
collected 1 item                                                               

tests/test_load_me_data.py F                                             [100%]

=================================== FAILURES ===================================
_________________________ MyUnitTest.test_load_me_data _________________________

self = <tests.test_load_me_data.MyUnitTest testMethod=test_load_me_data>

    def test_load_me_data(self):
>       with patch('multiecho.combination.logger') as log_mock:

tests/test_load_me_data.py:8: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib64/python3.11/unittest/mock.py:1437: in __enter__
    original, local = self.get_original()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <unittest.mock._patch object at 0x7f2373b00290>

    def get_original(self):
        target = self.getter()
        name = self.attribute
    
        original = DEFAULT
        local = False
    
        try:
            original = target.__dict__[name]
        except (AttributeError, KeyError):
            original = getattr(target, name, DEFAULT)
        else:
            local = True
    
        if name in _builtins and isinstance(target, ModuleType):
            self.create = True
    
        if not self.create and original is DEFAULT:
>           raise AttributeError(
                "%s does not have the attribute %r" % (target, name)
            )
E           AttributeError: <module 'multiecho.combination' from '/home/ben/src/multiecho/multiecho/combination.py'> does not have the attribute 'logger'

/usr/lib64/python3.11/unittest/mock.py:1410: AttributeError
=========================== short test summary info ============================
FAILED tests/test_load_me_data.py::MyUnitTest::test_load_me_data - AttributeError: <module 'multiecho.combination' from '/home/ben/src/multiec...
============================== 1 failed in 0.24s ===============================

(Using an older Python version such as 3.9 or 3.6 does not make a difference.)

Perhaps this test is just a placeholder, and not intended to be usable?

Add documentation for general users

Instead of simply referring to the usage of the tool, document what the purpose of multiecho is and how to adapt the echo combination to your specific workflow.

Make the output BIDS compatible

  • Create sidecar files for the combined images (optionally delete the individual images)
  • Update the IntendedFor field in the fieldmap json files
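
A rough sketch of both steps using only the standard library; all filenames and the sidecar contents are assumptions.

import json
from pathlib import Path

# 1. Write a sidecar for the combined image by copying the first echo's sidecar
first_echo_json = Path('sub-001/func/sub-001_task-motor_echo-1_bold.json')    # assumed name
combined_json   = Path('sub-001/func/sub-001_task-motor_bold_combined.json')
sidecar = json.loads(first_echo_json.read_text())
combined_json.write_text(json.dumps(sidecar, indent=4))

# 2. Point the fieldmap at the combined image instead of the individual echoes
fmap_json = Path('sub-001/fmap/sub-001_phasediff.json')                       # assumed name
fmap      = json.loads(fmap_json.read_text())
fmap['IntendedFor'] = ['func/sub-001_task-motor_bold_combined.nii.gz']
fmap_json.write_text(json.dumps(fmap, indent=4))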

Realign the images for the PAID method

Motion potentially adds variance to the data that could be partially removed by realignment, so the computation of the PAID weight maps should ideally be done on realigned data. The weights can then be mapped back to and applied in unrealigned space.
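
A sketch of that workflow, assuming the realignment itself has already been done by an external tool and both the realigned and the original echo series are available as numpy arrays:

import numpy as np

def paid_weights(realigned, TEs, n_vols=100):
    """Compute PAID weight maps (TE * tSNR) from realigned echo data (sketch).

    realigned: array (x, y, z, time, echo) of motion-corrected echoes,
    produced by an external realignment tool; TEs: echo times.
    """
    tail = realigned[..., -n_vols:, :]             # last n_vols volumes
    tsnr = tail.mean(axis=3) / tail.std(axis=3)    # per-voxel tSNR per echo
    return tsnr * np.asarray(TEs)                  # shape (x, y, z, echo)

def apply_weights(original, weights):
    """Apply the weight maps to the original (unrealigned) echoes."""
    w = weights[..., np.newaxis, :]                # broadcast over time
    return np.sum(original * w, axis=-1) / np.sum(w, axis=-1)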

Interest in maintaining a man page?

I’m preparing to package this for Fedora Linux as a dependency for https://github.com/Donders-Institute/bidscoin.

We always like to have man pages for executables when possible. Sometimes help2man can generate something decent. In this case, the result would be mostly legible, but it doesn’t do a great job with the examples. I plan to use the --help output of mecombine to write a man page in groff_man(7) format by hand.

Is there any interest in keeping such a man page upstream? The Python ecosystem doesn’t really have a way to install man pages, but shipping a man page in the sdist would make it available to other distribution packagers, at least.

If this is of interest, I’m happy to offer a PR adding a man page mecombine.1 in the top-level directory and adding it to MANIFEST.in. Otherwise, I’ll just maintain the man page downstream.
