mne-tools / mne-bids

MNE-BIDS is a Python package that allows you to read and write BIDS-compatible datasets with the help of MNE-Python.

Home Page: https://mne.tools/mne-bids/

License: BSD 3-Clause "New" or "Revised" License

Python 98.01% Makefile 0.17% TeX 1.68% Jinja 0.14%
bids eeg electroencephalography ieeg magnetoencephalography meg mne neuroimaging neuroscience

mne-bids's People

Contributors

a-hurst, adam2392, agramfort, aksoo, cbrnr, choldgraf, dependabot[bot], dnacombo, dominikwelke, eort, ezemikulan, guiomar, hoechenberger, jasmainak, kalenkovich, ktavabi, laemtl, larsoner, monkeyman192, mscheltienne, pre-commit-ci[bot], richardkoehler, rob-luke, romquentin, sappelhoff, sophieherbst, teonbrooks, tomdonoghue, wmvanvliet, yjmantilla


mne-bids's Issues

make_dataset_description uses only string but should accept array (list)

mne_bids.utils.make_dataset_description accepts only strings for its input fields. However, in the BIDS dataset_description.json, some fields must be arrays, such as Authors; see this (correct) example:

{
  "Name": "Matching Pennies",
  "BIDSVersion": "1.1.1, BEP006(EEG) as of 20th of June, 2018",
  "License": "ODbL (https://opendatacommons.org/licenses/odbl/summary/)",
  "Authors": [
    "Stefan Appelhoff",
    "Daryl Sauer",
    "Suleman Gill"
  ],
  "Acknowledgements": "We thank Daniel Myklody for general guidance and help in setting up the recording site.",
  "HowToAcknowledge": "We hope that all users of this dataset will cite the citation listed under the ReferencesAndLinks field.",
  "ReferencesAndLinks": [
    "Appelhoff, S., Sauer, D., & Gill, S. S. (2018, May 25). Matching Pennies: A Brain Computer Interface Implementation. Retrieved from osf.io/cj2dr"
  ]
}

Violating this yields the following error from the BIDS-validator:

	1: Invalid JSON file. The file is not formatted according the schema. (code: 55 - JSON_SCHEMA_VALIDATION_ERROR)
		./dataset_description.json
			Evidence: .Authors should be array

See the requirements of the BIDS-validator concerning dataset_description.json here.

Issues with overriding

The override=True parameter for raw_to_bids is a bit problematic.
By default it will delete another folder of BIDS data if it is a different acquisition but the same session.
I get that it is meant to allow data to be, well... overridden, but it has unintended side effects.
For individual files the effect is fine and works as intended.
I think override should either be False by default (it seems unnecessary to have to specify something like that), or there should be a check whether any parameters other than session and subject are provided, and if so, the folder override should be set to False. Otherwise, any data with a different task, acq, or other optional parameter but the same session/subject will be deleted.

Enable BTi handling

MEG data in the BTi format needs a lot of work in the BIDS environment.

  1. It needs to be supported by the BIDS-validator, see: https://github.com/INCF/bids-validator/issues/553

  2. We need a working example properly formatted in BIDS to be able to compare against it: bids-standard/bids-examples#121

  3. We need to properly copy BTi data within raw_to_bids (e.g., by writing a copyfile_bti function, but not necessarily)

Once we have that, we can fix this:

# FIXME: see these issues for reference:
# https://github.com/mne-tools/mne-bids/pull/84
# https://github.com/INCF/bids-validator/issues/553
with pytest.raises(subprocess.CalledProcessError):
    cmd = ['bids-validator', output_path]
    run_subprocess(cmd, shell=shell)

pytest fails for BTI data

When running pytest mne_bids/tests/test_mne_bids.py::test_bti --verbose, I always get the error shown below. I traced it by inserting a print statement and found that the following data path does not exist on my system:

/home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/bti/tests/data/test_pdf_linux

My directory tree only goes as far as /home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/bti/tests ... thus it stops short.

That directory contains:

.
├── __init__.py
├── __pycache__
│   ├── __init__.cpython-36.pyc
│   └── test_bti.cpython-36.pyc
└── test_bti.py

1 directory, 4 files

However, inspecting the error we see that for BTI data, the following happens:

        # BTi systems
        elif ext == '.pdf':
            print(raw_fname)  # I inserted this for debugging
            if os.path.isfile(raw_fname):
                raw = io.read_raw_bti(raw_fname, config_fname=config,
                                      head_shape_fname=hsp,
                                      preload=False, verbose=verbose)

Because the if os.path.isfile(raw_fname): evaluates to False every time, I get

E UnboundLocalError: local variable 'raw' referenced before assignment

Any ideas how to fix this?

PS, I get a similar error for test_kit, which tells me:

E FileNotFoundError: [Errno 2] No such file or directory: '/home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/kit/tests/data/test.sqd'


Error commencing

 pytest mne_bids/tests/test_mne_bids.py::test_bti --verbose
Test session starts (platform: linux, Python 3.6.6, pytest 3.6.2, pytest-sugar 0.9.1)
cachedir: .pytest_cache
rootdir: /home/stefanappelhoff/Desktop/mne-bids, inifile:
plugins: sugar-0.9.1, faulthandler-1.5.0, cov-2.5.1


――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― test_bti ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

    def test_bti():
        """Test functionality of the raw_to_bids conversion for BTi data."""
        output_path = _TempDir()
        data_path = op.join(base_path, 'bti', 'tests', 'data')
        raw_fname = op.join(data_path, 'test_pdf_linux')
        config_fname = op.join(data_path, 'test_config_linux')
        headshape_fname = op.join(data_path, 'test_hs_linux')
    
        raw_to_bids(subject_id=subject_id, session_id=session_id, run=run,
                    task=task, raw_file=raw_fname, config=config_fname,
                    hsp=headshape_fname, output_path=output_path,
>                   verbose=True, overwrite=True)

mne_bids/tests/test_mne_bids.py:116: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
mne_bids/mne_bids.py:425: in raw_to_bids
    config=config, verbose=verbose)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

raw_fname = '/home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/bti/tests/data/test_pdf_linux', electrode = None
hsp = '/home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/bti/tests/data/test_hs_linux', hpi = None
config = '/home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/bti/tests/data/test_config_linux', verbose = True

    def _read_raw(raw_fname, electrode=None, hsp=None, hpi=None, config=None,
                  verbose=None):
        """Read a raw file into MNE, making inferences based on extension."""
        fname, ext = _parse_ext(raw_fname)
    
        # MEG File Types
        # --------------
        # KIT systems
        if ext in ['.con', '.sqd']:
            raw = io.read_raw_kit(raw_fname, elp=electrode, hsp=hsp,
                                  mrk=hpi, preload=False)
    
        # Neuromag or converted-to-fif systems
        elif ext in ['.fif', '.gz']:
            raw = io.read_raw_fif(raw_fname, preload=False)
    
        # BTi systems
        elif ext == '.pdf':
            print(raw_fname)
            if os.path.isfile(raw_fname):
                raw = io.read_raw_bti(raw_fname, config_fname=config,
                                      head_shape_fname=hsp,
                                      preload=False, verbose=verbose)
    
        # CTF systems
        elif ext == '.ds':
            raw = io.read_raw_ctf(raw_fname)
    
        # EEG File Types
        # --------------
        # BrainVision format by Brain Products, expects also a .eeg and .vmrk file
        elif ext == '.vhdr':
            raw = io.read_raw_brainvision(raw_fname)
    
        # EDF (european data format) or BDF (biosemi) format
        elif ext == '.edf' or ext == '.bdf':
            raw = io.read_raw_edf(raw_fname)
    
        # EEGLAB .set format, if there is a separate .fdt file, it should be in the
        # same folder as the .set file
        elif ext == '.set':
            raw = io.read_raw_eeglab(raw_fname)
    
        # Neuroscan .cnt format
        elif ext == '.cnt':
            raw = io.read_raw_cnt(raw_fname)
    
        # No supported data found ...
        # ---------------------------
        else:
            raise ValueError("Raw file name extension must be one of %\n"
                             "Got %" % (ALLOWED_EXTENSIONS, ext))
>       return raw
E       UnboundLocalError: local variable 'raw' referenced before assignment

mne_bids/io.py:82: UnboundLocalError
------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------
/home/stefanappelhoff/miniconda3/envs/mne/lib/python3.6/site-packages/mne/io/bti/tests/data/test_pdf_linux

 mne_bids/tests/test_mne_bids.py::test_bti ⨯                                                                                                                                                 100% ██████████

Results (0.42s):
       1 failed
         - mne_bids/tests/test_mne_bids.py:105 test_bti

add tests

We need tests. Reminder for anyone who is bored

and also continuous integration

measurement date always tuple now

With mne-tools/mne-python#5500, the measurement date is always a tuple. This (correctly) breaks a few tests.
I saw that with 9ee36dd @sappelhoff added a fix in one spot, but I was thinking it might be good to push this fix through separately so that other PRs can be rebased and continue without having to wait for #78.
I can make a PR with the changes if you think it would be good to get this fixed ASAP (the goal is to keep this repo compatible with the master branch of mne-python anyway, not the public release, right?)

Channel TSV

Since #1 is settled for now - I can close.

Here I'm curious why *_channels.tsv for the sample data only displays sample/condition info? According to BEP008 3.2:

*_channels.tsv: A channels .tsv file listing channel names, types, and other optional information.

I didn't see anything over at #4128, so it would be helpful to know how strictly we should adhere to BEP008 in this repo?

BUG: Not saving MEG data if format is not .fif AND path does not exist

# for FIF, we need to re-save the file to fix the file pointer
# for files with multiple parts
if ext in ['.fif', '.gz']:
    raw.save(raw_file_bids, overwrite=overwrite)
else:
    if os.path.exists(raw_file_bids):
        if overwrite:
            os.remove(raw_file_bids)
            sh.copyfile(raw_file, raw_file_bids)
        else:
            raise ValueError('"%s" already exists. Please set overwrite to'
                             ' True.' % raw_file_bids)
return output_path

This is an error that somehow slipped through the tests. Checking the if-clauses, the file gets saved if:

  • it's .fif or .gz, or
  • it's any other MEG format AND the path exists AND overwrite=True

--> however, what if the format is NOT .fif/.gz and the path does not exist? Then nothing gets copied at all.

Read BIDS folders

Next step: support reading BIDS directory. We will need to walk over the directory and compile all the metadata. We might want a structured dictionary or a custom object that holds all of this and help coordinate reading a specific subject, task, session, etc.

cc: @jasmainak

memory issue in circleCI

we get this error:

Unexpected failing examples:
/home/circleci/project/examples/convert_ds000117.py failed leaving traceback:
Traceback (most recent call last):
  File "/home/circleci/project/examples/convert_ds000117.py", line 39, in <module>
    fetch_faces_data(data_path, repo, subject_ids)
  File "/home/circleci/project/mne_bids/datasets.py", line 38, in fetch_faces_data
    print_destination=True, resume=True, timeout=10.)
  File "<string>", line 2, in _fetch_file
  File "/home/circleci/miniconda/envs/mne_bids/lib/python3.6/site-packages/mne/utils.py", line 729, in verbose
    return function(*args, **kwargs)
  File "/home/circleci/miniconda/envs/mne_bids/lib/python3.6/site-packages/mne/utils.py", line 2008, in _fetch_file
    verbose_bool)
  File "/home/circleci/miniconda/envs/mne_bids/lib/python3.6/site-packages/mne/utils.py", line 1923, in _get_http
    local_file.write(chunk)
OSError: [Errno 12] Cannot allocate memory

reading in BIDS compatible datasets

Something like:

from mne_bids import read_raw_bids
raw = read_raw_bids(raw_fname, tsv_fnames=None, json_fnames=None)
# None would mean the tsv_fname and json_fname are constructed from raw_fname

BIDS-iEEG in MNE-bids

Hey @jasmainak - I just chatted with @dorahermes and we'd like to figure out a path forward for incorporating BIDS-iEEG support within MNE-python.

I know you guys have been working on this on the MEG side of things. It looks like this repository is mostly for constructing a BIDS-compatible folder for the purposes of automating the conversion process. Is that correct?

Have you guys also been working on any I/O for MNE-python and BIDS? If so, I can loop in and do some work to get this functioning on the iEEG side of things.

[ENH] implement coordsystem.json and electrodes.tsv for (i)EEG

Currently the mne_bids._coordsystem_json function only works for MEG:

# TODO: Implement coordsystem.json and electrodes.tsv for EEG and iEEG
if kind == 'meg':
    _coordsystem_json(raw, unit, orient, manufacturer, coordsystem_fname,
                      verbose)

For EEG, the coordsystem specifications are slightly different, see:

Also, an electrodes.tsv file is needed to then make use of the coordsystem in EEG:

Both should be added to MNE-BIDS.

BONUS: Make it work for iEEG as well ;-)

cc @choldgraf

doc updates

  • Add contributing.md and mention that we require bids-validator for the test to complete
  • add api.rst

update warnings in MNE

Currently MNE throws warnings if a raw file name does not end with -raw.fif. However, the new BIDS specification uses different names: filenames now end with _meg.fif, and sss or tsss go under _proc-tsss etc. We should update the warnings.

Channel info for KIT reference channels

_channel_tsv calls channel_type from io.pick, which returns ref_meg for KIT reference channels. Since this isn't in the dictionary of channel names and descriptions, it yields an error. Could a solution be to just add the corresponding info to the existing dictionaries of channel types and descriptions?

The new dictionaries could be something like:
map_chs = dict(grad='MEGGRAD', mag='MEGMAG', stim='TRIG', eeg='EEG',
               eog='EOG', ecg='ECG', misc='MISC', ref_meg='KITREF')

map_desc = dict(grad='Gradiometer', mag='Magnetometer', stim='Trigger',
                eeg='ElectroEncephaloGram', ecg='ElectroCardioGram',
                eog='ElectrOculoGram', misc='Miscellaneous',
                ref_meg='KITReference')

[BUG] overwrite param is overly aggressive

When we want to iteratively create a BIDS directory, overwrite will behave unexpectedly.

Example:

  • I have 2 runs of the same task (however, single session ... so session_id=None)
  • I load the data for run 1 and call raw_to_bids
  • Then I repeat the same process for run 2

... if in the example overwrite=True, then I will end up only having run 2, because run 1 gets deleted:

data_path = make_bids_folders(subject=subject_id, session=session_id,
                              kind=kind, root=output_path,
                              overwrite=overwrite,
                              verbose=verbose)
if session_id is None:
    ses_path = data_path
else:
    ses_path = make_bids_folders(subject=subject_id, session=session_id,
                                 root=output_path,
                                 overwrite=False,
                                 verbose=verbose)

The issue: overwrite should only overwrite existing files, but it potentially removes whole directories with their contents (sh.rmtree), thereby discarding innocent data ;-)

Example file name

Is there a particular reason the example file is called 'plot_mne_sample'? is there a final intention to plot something?

Allowing for raw files to be automatically relocated

I assume the eventual scope of this library is to allow any raw files that are read to be relocated to an appropriate folder, as per Section 5 (Appendix 1) of the BIDS MEG specification extension?
The raw_to_bids function as it stands doesn't produce the folder required to place the raw data in (but can easily be modified to do so).
As a follow-on, I know it is a different library, but if the BaseRaw class in mne stored the paths to the various files that were used to generate it, then this information could be pulled out automatically by the raw_to_bids function instead of having to specify the files again.
A few other little notes:

  • Some file names (e.g., the sidecar file _meg.json) aren't able to have the run etc. info as part of their name.
  • "SoftwareFilters" should default to "none", not "n/a", in the _channel_json function.
  • 'MEGCoordinateSystem' should be the coordinate system, not the manufacturer, I believe (cf. 3.3.1 of the BIDS MEG extension).
    (sorry for having a few misc things tacked on...)

STI 014 not needed for KIT data

Ok, last issue for now I think...
I understand that the STI 014 channel is created automatically for KIT data for the purpose of triggers, but I don't think it should be included in the channel list in the BIDS format, as it isn't an actual channel.
Even if I specify the information to get triggers to work correctly and produce a correct-looking events.tsv, the STI 014 channel is still there.
I am thinking we could just check whether the data is KIT data and exclude the STI 014 channel from the counting/channel list? Or even just check whether a STI 014 channel exists and ignore it if it does, as I assume it will never be a channel we want to consider.

Fixing channels tsv

So as it stands, the channels.tsv file doesn't quite produce the expected results.
As a comment in the function that produces it indicates, the MEGGRAD value isn't even in the BIDS specification; MEGGRADAXIAL and MEGGRADPLANAR are required to specify the actual gradiometer type.
Further, as I mentioned in the issue I raised for mne-python (mne-tools/mne-python#5311), the units of the KIT gradiometer channels are set to T, which mne then uses to decide that the channel is a magnetometer, even though it isn't.
My initial thought on how to fix this would be to change the map_chs dictionary to one that maps the channel's coil_type value (e.g., 'coil_type': 6001 in the KIT data I have) to the channel type, i.e., something like
map_chs.update(meggradaxial=(FIFF.FIFFV_COIL_KIT_GRAD), ... etc)
The issue is that this may be a bit cumbersome, and we would also need to know exactly what type of gradiometer etc. the various manufacturers use.
Again, I may well be overlooking some other value that is more suitable for correctly identifying the channel type.

setting DigitizedLandmarks/HeadPoints

At the moment both of these values are set as false by default.
Since they are both REQUIRED values I think it would be good if they could be set correctly and automatically.
For KIT data, you can have it so that when you specify something in the electrode or hsp arguments of raw_to_bids you can set the values correctly like so:

    extra_data["DigitizedLandmarks"] = (electrode is not None and
                                        emptyroom is not True)
    extra_data["DigitizedHeadPoints"] = (hsp is not None and
                                         emptyroom is not True)

Where this extra_data dictionary is passed to the requisite function.
Also, see #50 about the empty room variable above.

Both .elp and .hsp files (for KIT data) should also be copied to the working folder (see sec. 8.4.5 of the bids specification)
I currently have working code for KIT data, but this is one thing that will require a bit more work to get working for other formats.

consider adding generic _events.json

_events.json accompanies the events.tsv and describes its columns.

We can consider offering to write a minimal events.json:

{
  "onset": {
    "LongName": "onset",
    "Description": "Onset of the event",
    "Units": "seconds"
  },
  "duration": {
    "LongName": "duration",
    "Description": "Duration of the event",
    "Units": "seconds"
  }
}

Users can then add the other columns manually or with custom code --> but at least there would be a starting point.

This events.json could be controlled by a parameter in the main raw_to_bids function that defaults to False; if True, the minimal default file above is written.

What do you think?
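Writing that starting point could be as small as this sketch (the helper name is hypothetical):

```python
import json

# The minimal events.json proposed above, as a constant.
MINIMAL_EVENTS_JSON = {
    "onset": {"LongName": "onset",
              "Description": "Onset of the event",
              "Units": "seconds"},
    "duration": {"LongName": "duration",
                 "Description": "Duration of the event",
                 "Units": "seconds"},
}


def write_minimal_events_json(fname):
    """Write the minimal events.json sketched above (hypothetical helper)."""
    with open(fname, 'w') as f:
        json.dump(MINIMAL_EVENTS_JSON, f, indent=2)
```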

Handling empty room data

It would be nice to be able to handle data that is specified as empty room data automatically.
To do this I propose adding an extra argument to raw_to_bids. Something along the lines of (this is the docstring for it):

"""
    emptyroom : bool | str
        Whether the supplied file is an empty room measurement.
        If False, do nothing.
        If True, the file is treated as an empty room file and handled
        accordingly.
        If a string is given, it is taken to be the path to the empty room
        file.
"""

If the value is True then it will process the file in such a way as to generate all the empty room data (also see code snippet in #48 ).
The data for the recording time can all be extracted from the raw files, so it can all be done automatically.

The path specified is to allow for the AssociatedEmptyRoom value in the sidecar .json to be set.
I couldn't think of a better way to specify this value than as a string here. It does reuse the argument for two similar but different purposes, though. This is also the only part that I don't think can really be done automatically (unless you add another argument like 'has_emptyroom' which essentially generates the empty room path; I'm not sure whether it would be better to have this happen externally or not...)

Adding extra information to the sidecar .json

There are a number of parameters in the sidecar json file that could be good to have.
These include:

  • Institution name
  • Manufacturers model name
  • Device serial number
  • Dewar position

At the moment I am adding these parameters via an extra_data dictionary that can contain arbitrary extra data to add to the sidecar (and other files), i.e., my current docstring for it:

    extra_data : dictionary
        A dictionary containing any extra information required to populate the
        json files.
        Currently supported keys are:
        'InstitutionName', 'ManufacturersModelName','DewarPosition',
        'Name' (Name of the project), 'DeviceSerialNumber'

I understand this is a pretty hacky solution though and would like something possibly a bit more robust.
The problem is that this information isn't necessarily stored within any of the actual files being processed by mne. (serial number is in the KIT .con files, I don't know about other data...)

There are a few other parameters that can be added but I will make separate issues for them as they have slightly different scope...

Keeping track of all (req + optional) metadata?

Hey guys - question for you as I'm working through the details of BIDS-iEEG right now. Have you thought about how to keep track of all the metadata fields that are possible in a BIDS structure?

I was thinking of writing a function that could generate blank BIDS templates that'd basically have a bunch of JSON files etc where users would need to fill in the blanks or delete fields as necessary.

Could be a useful way of defining the format at any moment in time too (so it's more machine-readable instead of being in a google doc). Any thoughts on that ?
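A minimal version of that template generator might look like this sketch; the field names and the 'n/a' placeholder convention are illustrative, not the full BIDS schema:

```python
import json


def blank_sidecar(fields):
    """Return a sidecar dict with every field blank, for users to fill in.

    Sketch: 'n/a' placeholders mark the blanks; unneeded fields can
    simply be deleted afterwards.
    """
    return {field: 'n/a' for field in fields}


template = blank_sidecar(['TaskName', 'Manufacturer', 'PowerLineFrequency'])
print(json.dumps(template, indent=2))
```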

ImportError: cannot import name '_check_load_mat'

Hi, I am attempting to use mne-bids for the first time.

The command 'import mne_bids' throws the error:
ImportError: cannot import name '_check_load_mat'

I am using Python 3.6 with Anaconda3.

Any help is appreciated. Thanks
-Nick

make mne-report bids aware

This is something me and @teonbrooks discussed before.

mne-report could make use of the bids structure for quality assurance. Here are some things that come to mind:

  1. Evoked butterfly plots for different conditions
  2. raw.plot with bad channels marked in red
  3. plot_events
  4. plot cov
  5. Display meta information in sidecar files (events.tsv, channels.tsv and _meg.json)
  6. Show the stimuli files (images for example)
  7. Show fiducial locations

Feel free to edit this description to add more to the list

writing big files to be BIDS compatible

Right now, files > 2 GB are written as filename-raw.fif and filename-raw-1.fif. However, this needs to be made BIDS-aware, that is, write filename_part-01_meg.fif and filename_part-02_meg.fif.

EEGLAB: allow to copy .fdt files and update the pointer in corresponding .set

EEGLAB files are MATLAB files, albeit with a .set extension instead of the usual .mat. An EEGLAB file can contain the EEG data in a MATLAB struct field called data. Alternatively, this data field can contain a link to an accompanying (separate) .fdt (=FloatDaTa) file.

Currently MNE-BIDS only allows standalone .set files and throws an error when a .set+.fdt combination is used. This should be fixed, see:

mne-bids/mne_bids/utils.py

Lines 497 to 505 in 7403fc5

# FIXME: We should move the .fdt file together with the .set file and
# give meaningful names to both. Then the .set file (which is a matlab
# .mat file) needs to be read. The EEG matlab structure of the .set
# file contains a field "data", which is a string pointing to the .fdt
# file. This string needs to be updated to the new BIDS name that the
# .set file received while copying.
#
# Unfortunately, there are issues with performing a round-trip of
# reading the .set, modifying it, and saving it again.

error trying to use the function raw_to_bids

Hi,
When using raw_to_bids to convert my CTF data:
raw_to_bids(subject_id='01', run='01', task='WM', raw_fname=raw_fname, events_fname=events_fname, output_path=output_path, event_id=event_id, overwrite=True)
I experienced this error:

TypeError                                 Traceback (most recent call last)
<ipython-input-84-1502beb8bad4> in <module>()
      2                     raw_fname=raw_fname, events_fname=events_fname,
      3                     output_path=output_path, event_id=event_id,
----> 4                     overwrite=True)

/Applications/anaconda/lib/python2.7/site-packages/mne_bids/bids_meg.pyc in raw_to_bids(subject_id, run, task, raw_fname, output_path, events_fname, event_id, hpi, electrode, hsp, config, overwrite, verbose)
    287 
    288     # save stuff
--> 289     _scans_tsv(raw, raw_fname_bids, scans_fname, verbose)
    290     _fid_json(raw, unit, orient, manufacturer, fid_fname, verbose)
    291     _meg_json(raw, task, manufacturer, meg_fname, verbose)

/Applications/anaconda/lib/python2.7/site-packages/mne_bids/bids_meg.pyc in _scans_tsv(raw, raw_fname, fname, verbose)
    106         meas_date = meas_date[0]
    107 
--> 108     acq_time = datetime.fromtimestamp(meas_date
    109                                       ).strftime('%Y-%m-%dT%H:%M:%S')
    110 

TypeError: a float is required

Do you have any ideas?
Thanks a lot
Romain

acq_time format

On master of bids-validator, we get the following error on running tests:

1: Entries in the "acq_time" column of _scans.tsv should be 
expressed in the following format YYYY-MM-DDTHH:mm:ss 
(year, month, day, hour (24h), minute, second; this is equivalent to the 
RFC3339 “date-time” format.  (code: 84 - ACQTIME_FMT)

changing session within a participant

Hi,
I have multiple sessions per participant and I did not see how to specify the session when using the raw_to_bids function (to output the BIDS files in a 'ses-02' folder for the same participant). Is that possible?
Thank you!
Romain

Acquisition time info in KIT data

Hi all, I've been testing raw_to_bids with KIT data and encountered a couple minor issues. Putting the first one here:

In _scans_tsv, the retrieval of the acquisition time selects the first element in a list returned by raw.info['meas_date']

acq_time = datetime.fromtimestamp(raw.info['meas_date'][0]).strftime('%Y-%m-%dT%H:%M:%S')

But, unlike .fif files, in .con and .sqd files, raw.info['meas_date'] returns only a single value, which causes an error. A quick fix could be selecting the acquisition time differently based on the file extension (below), but maybe this fails to address an issue in the format of the measurement info in KIT data?

fname, ext = os.path.splitext(raw_fname)
if ext in ['.con', '.sqd']:
    acq_time = datetime.fromtimestamp(raw.info['meas_date']).strftime('%Y-%m-%dT%H:%M:%S')
else:
    acq_time = datetime.fromtimestamp(raw.info['meas_date'][0]).strftime('%Y-%m-%dT%H:%M:%S')

File type suffixes

What's the consensus on handling suffixes like -cov, -epo, or -ave in the BIDS file naming scheme? Looks like we'd have to use the _proc label, e.g., _proc-cov_meg.fif.

[BUG] _channels_tsv writes default unit for channel type

mne_bids._channels_tsv writes a unit that is assumed based on the channel type:

units = [_unit2human.get(ch_i['unit'], 'n/a') for ch_i in raw.info['chs']]

This is problematic because different manufacturers save the raw data in different units. Sometimes there are even differences within a single manufacturer's data formats. Usually, the units are declared in the dataset; when reading the data with MNE-Python, the data gets automatically scaled to, e.g., volts, so that all MNE-Python objects consistently work in volts.

This is a problem, because we are then copying the non-mne-modified raw data to the bids directory ... and the units will not match.

BrainVision export for MNE

This isn't really an issue to solve but rather an announcement to save duplication of effort. It looks like BrainVision and EDF will be the preferred formats for BIDS-EEG, so it's important that MNE-BIDS has a way of writing to those formats.

I recently implemented a basic BrainVision writer for MNE-Python Raw objects as part of my own package of convenience functions. I have plans to extend support to reading and writing MNE-Python Epochs objects.

Following discussions with @robertoostenveld, the core parts of this code will be re-factored and moved to a separate package (planned name: pybv) and will not be dependent on MNE in any way, but rather manipulate simple NumPy arrays for the data and events and a dictionary of attributes. The MNE-based writer functionality will then be reworked to simply be wrappers around this functionality.

Similarly, @robertoostenveld has been kind enough to license his pure-python EDF reader and writer under the 3-clause BSD license. I'll also be packaging that up in the next few weeks and putting that on PyPI as pyEDF. There are already two existing EDF packages available on PyPI; however, they are both dependent on the EDFlib C/C++ library.

After the refactoring, repackaging and adding a good CI test suite, the plan is to offer to transfer ownership of the repositories to INCF. The code will be intentionally minimal in terms of reading and writing arrays and not depending on MNE. That said, there are some rather nice array-based constructors for the MNE object classes, so it should be straightforward to e.g. integrate convenience functions for the writers under an mne.export module and potentially refactor the current readers to use these external libraries. But that's a bridge the broader MNE community can decide whether to cross when we get there.

Again, this is mostly an announcement in a convenient forum because a lot of this discussion has happened in private emails or in person and thus isn't accessible to the MNE-Python/BIDS community.

@choldgraf @sappelhoff This isn't quite just giving the writer over to MNE-BIDS, but it should still help.
