
mne-realtime's Introduction

MNE-realtime

Warning

This project has been discontinued in favor of MNE-LSL. At the moment, MNE-LSL replaces the LSLClient but does not yet support the FieldTrip buffer.

This is a package for real-time analysis of MEG/EEG data with MNE. The documentation can be found at https://mne.tools/mne-realtime/.

Dependencies

Besides Python 3, mne-realtime requires numpy, scipy, and MNE-Python (see Installation below).

Installation

We recommend the Anaconda Python distribution. We require that you use Python 3. You may choose to install mne-realtime via pip.

Besides numpy and scipy (which are included in the standard Anaconda installation), you will need to install the most recent version of MNE using the pip tool:

$ pip install -U mne

Then install mne-realtime:

$ pip install https://api.github.com/repos/mne-tools/mne-realtime/zipball/main

These pip commands also work if you want to upgrade if a newer version of mne-realtime is available. If you do not have administrator privileges on the computer, use the --user flag with pip.

Quickstart

import os.path as op

import mne
from mne_realtime import FieldTripClient, RtEpochs

# assumes the MNE sample dataset is available and a FieldTrip buffer is
# running on localhost:1972
data_path = mne.datasets.sample.data_path()
event_id, tmin, tmax = 1, -0.2, 0.5

info = mne.io.read_info(op.join(data_path, 'MEG', 'sample',
                                'sample_audvis_raw.fif'))
with FieldTripClient(host='localhost', port=1972,
                     tmax=30, wait_max=5, info=info) as rt_client:
    rt_epochs = RtEpochs(rt_client, event_id, tmin, tmax, ...)
    rt_epochs.start()
    for ev in rt_epochs.iter_evoked():
        epoch_data = ev.data

    # or alternatively, get the last n_samples as an Epochs object
    rt_epoch = rt_client.get_data_as_epoch(n_samples=500)
    continuous_data = rt_epoch.get_data()

The FieldTripClient supports multiple acquisition vendors through the FieldTrip buffer. It can be replaced with other clients such as LSLClient; see the API documentation for the full list of clients.
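
A minimal sketch (not from the README) of swapping in the LSLClient, reusing the info object from the Quickstart above; 'my_lsl_stream' is a placeholder for the identifier of your LSL stream:

from mne_realtime import LSLClient

# 'my_lsl_stream' is a placeholder -- use the name/source_id of your stream
with LSLClient(info=info, host='my_lsl_stream', wait_max=5) as client:
    epoch = client.get_data_as_epoch(n_samples=500)
    data = epoch.get_data()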

Bug reports

Use the GitHub issue tracker to report bugs.

mne-realtime's People

Contributors

charlesbmi, drammock, jasmainak, larsoner, massich, mscheltienne, oori, rob-luke, sappelhoff, teonbrooks, timonmerk

mne-realtime's Issues

[ENH] Cleanup Realtime Module

The realtime module has been worked on piecewise and doesn't receive the same attention as other parts of the project. The aim would be to build upon #6120, #6141 with a more unified approach to our realtime clients and servers.

Todos

  • Make examples run on CircleCI (see mne-tools/mne-python#6141 (comment))
  • Get LSLClient and RtEpochs to play nicely. RtEpochs may need to be cleaned up in order to best serve other clients. I am having a difficult time debugging its interactions with the LSLClient.
  • Get the FieldTrip realtime examples to run on CircleCI. This requires getting the external dependencies onto Circle.
  • Restore LSL realtime example on CircleCI
  • Make LSL stream loop over the file
  • Make LSL client work with RtEpochs

How to use an LSL marker stream to get RtEpochs

I am trying to receive EEG data and event triggers via LSL and decode the event-related EEG data in real time.
The EEG data does not have stim channels; instead, there is a separate LSL stream of triggers (string markers).

In this case, how can I get epochs time-locked to these events?
RtEpochs() takes a stim_channel as an argument, but can I use the trigger stream instead?

Thanks in advance!
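
Not an official answer, but one possible workaround sketch: read the marker stream directly with pylsl and map the string markers to integer event codes yourself (the stream type 'Markers' and the marker-to-id mapping below are hypothetical):

from pylsl import StreamInlet, resolve_byprop

# resolve the (hypothetical) marker stream published alongside the EEG stream
streams = resolve_byprop('type', 'Markers', timeout=5)
inlet = StreamInlet(streams[0])
marker_to_id = {'left': 1, 'right': 2}  # hypothetical marker -> event_id map

sample, timestamp = inlet.pull_sample(timeout=1.0)
if sample is not None:
    event_code = marker_to_id.get(sample[0])  # use this to select/annotate epochs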

Stopping the FieldTrip client

A confusing behavior of the FieldTripClient was reported to me: the behavior of the tmax argument is not very consistent. There is no logging message when tmax is exceeded; instead, users see that isi_max is exceeded, which is confusing. This should be fixed along with the isi_max behavior discussed with @Odingod.

Emotiv EPOC+ Real Time Data Processing in MNE Python

Hello MNE developers, I ran into an issue while trying to do real-time reading from a commercial EEG headset, the Emotiv EPOC+. I tried to tweak the code from "Compute real-time evoked response with FieldTrip client" (https://martinos.org/mne/stable/auto_examples/realtime/ftclient_rt_average.html), since I already run a FieldTrip buffer, but I got an error with the following traceback:
...
FieldTripClient: Waiting for server to start
FieldTripClient: Connected
FieldTripClient: Retrieving header
FieldTripClient: Header retrieved
Info dictionary not provided. Trying to guess it from FieldTrip Header object
:2: RuntimeWarning: Info dictionary not provided. Trying to guess it from FieldTrip Header object
  tmax=150, wait_max=10) as rt_client:
Traceback (most recent call last):
  File "", line 2, in <module>
    tmax=150, wait_max=10) as rt_client:
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\realtime\fieldtrip_client.py", line 111, in __enter__
    self.info = self._guess_measurement_info()
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\realtime\fieldtrip_client.py", line 159, in _guess_measurement_info
    int(re.findall(r'[^\W\d_]+|\d+', ch)[-1])
ValueError: invalid literal for int() with base 10: 'GYROX'
...

The latest suggestion after discussions with some of the core developers: instead of using raw_info = rt_client.get_measurement_info(), create my own info using the MNE function create_info. I am working on that now.
Any other suggestions for solving this issue will be very welcome :)
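
A rough sketch of that workaround; the channel list and sampling rate below are assumptions for the EPOC+ and should be checked against the actual device configuration:

import mne
from mne_realtime import FieldTripClient

# assumed 14-channel EPOC+ montage at 128 Hz -- verify against your headset
ch_names = ['AF3', 'F7', 'F3', 'FC5', 'T7', 'P7', 'O1',
            'O2', 'P8', 'T8', 'FC6', 'F4', 'F8', 'AF4']
info = mne.create_info(ch_names=ch_names, sfreq=128., ch_types='eeg')

with FieldTripClient(host='localhost', port=1972,
                     tmax=150, wait_max=10, info=info) as rt_client:
    ...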

Adding modules for real time specific processing

I would like to propose some additional files for the mne-realtime package that would allow for fast computation and handling of processed data.

rt_filter.py
The MNE filter functions seem fairly slow and could therefore impose restrictions on high-sampling-rate processing. Instead, a band-pass filter can be designed before the real-time processing starts and then applied using a fast numpy convolution.
(see https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/filter.py for an example implementation)
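
A minimal sketch of the idea (not the py_neuromodulation implementation; band, filter length, and sampling rate are placeholders): design the FIR once offline, then apply it to each incoming buffer with a plain convolution.

import numpy as np
from scipy.signal import firwin

sfreq = 1000.  # placeholder sampling rate
coeffs = firwin(101, [13., 35.], pass_zero=False, fs=sfreq)  # example beta band

def rt_filter(buffer):
    """Apply the precomputed FIR to one (n_channels, n_samples) buffer."""
    return np.array([np.convolve(ch, coeffs, mode='same') for ch in buffer])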

rt_normalization.py
During real-time analysis it becomes necessary to normalize data over a certain time window; a mean or median option could be handy here.
(see https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/realtime_normalization.py for an example implementation)
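
A sketch of one way this could look, assuming buffers shaped (n_channels, n_samples) and a caller that keeps the recent past samples:

import numpy as np

def rt_normalize(buffer, past, method='mean'):
    """Normalize the current buffer against the recent past."""
    ref = np.concatenate([past, buffer], axis=1)
    center = (np.median(ref, axis=1, keepdims=True) if method == 'median'
              else np.mean(ref, axis=1, keepdims=True))
    return (buffer - center) / (np.std(ref, axis=1, keepdims=True) + 1e-12)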

rt_features.py
For neural decoding, different kinds of features (frequency, time, and spatial domain) need to be computed, ideally across multiple threads. This could be achieved using a class that calls predefined feature routines.
(see https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/features.py for example implementation)
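
A sketch of what such a class could look like (the feature routines and thread-pool size below are placeholders):

from concurrent.futures import ThreadPoolExecutor
import numpy as np

class RtFeatures:
    """Run registered feature routines on each buffer, in parallel."""

    def __init__(self, routines, n_jobs=2):
        self.routines = routines  # dict of name -> callable(buffer)
        self.pool = ThreadPoolExecutor(max_workers=n_jobs)

    def compute(self, buffer):
        futures = {name: self.pool.submit(fn, buffer)
                   for name, fn in self.routines.items()}
        return {name: f.result() for name, f in futures.items()}

features = RtFeatures({'variance': lambda b: np.var(b, axis=1)})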

rt_analysis.py
When analyzing streaming data, the data processing needs to be defined. A pandas DataFrame with predictions, features, and timestamps could be used to collect results for saving. After data acquisition ends, the data can be saved using mne_bids, and decoding predictions/performances could be stored as BIDS derivatives.
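
A sketch of the bookkeeping side (column names are placeholders; the mne_bids step is omitted): collect one row per processed buffer, then write everything out once acquisition stops.

import pandas as pd

rows = []

def log_result(timestamp, features, prediction):
    """Collect one row of results per processed buffer."""
    rows.append({'time': timestamp, **features, 'prediction': prediction})

# after data acquisition ends
results = pd.DataFrame(rows)
results.to_csv('decoding_output.csv', index=False)  # or write as a BIDS derivative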

How to use MNE-Realtime with mne-qt-browser or Raw.plot()?

import os.path as op
import subprocess
import mne
from mne.viz import plot_events
from mne.utils import running_subprocess
from mne_realtime import FieldTripClient, RtEpochs

info = mne.io.read_info('<myEEGFileInEDFFormat>')
event_id, tmin, tmax = 1, -0.2, 0.5

with FieldTripClient(host='localhost', port=1972,tmax=40, wait_max=5, info=info) as rt_client:
    rt_epochs = RtEpochs(rt_client, event_id, tmin, tmax, proj=None)
    rt_epochs.start()
    for ev in rt_epochs.iter_evoked():
        epoch_data = ev.data
        #mne.io.raw.plot(show=True,block=True,show_options=False,theme='dark',title=None,proj=False)
        print(epoch_data)

    rt_epochs.stop()

I don't want to plot using matplotlib; does anyone have a snippet for plotting with PyQtGraph (mne-qt-browser)?
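
Not an official snippet, but one option, assuming a recent MNE-Python with mne-qt-browser installed: the Qt browser works on Raw/Epochs objects, so you could periodically pull the latest samples as an Epochs object and browse that instead of plotting each evoked response.

import mne

mne.viz.set_browser_backend('qt')  # use mne-qt-browser (PyQtGraph) instead of matplotlib

rt_epoch = rt_client.get_data_as_epoch(n_samples=500)
rt_epoch.plot(block=True)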

Interest in filtering for RtEpochs?

For online experiments, I added causal filtering to RtEpochs (using lfilter in _process_raw_buffer), which circumvents the need for filtering on the segmented data (and needing longer epochs or risking edge artifacts).

Is there interest to include this in MNE? Then I would tidy up the code and create a PR.
A current limitation of my code is that the filter delay is not compensated and events are not shifted, so users should use minimum-phase or IIR filters.
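
For reference, a sketch of the general approach (not the actual PR code; band, order, and channel count are placeholders): keep a per-channel filter state so each raw buffer can be filtered causally, without padding or edge artifacts.

import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

sfreq, n_channels = 1000., 64  # placeholders
b, a = butter(4, [1., 40.], btype='bandpass', fs=sfreq)
zi = np.tile(lfilter_zi(b, a), (n_channels, 1))  # one filter state per channel

def filter_buffer(buffer, zi):
    """Causally filter one (n_channels, n_samples) buffer, carrying the state."""
    filtered, zi = lfilter(b, a, buffer, axis=-1, zi=zi)
    return filtered, zi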

To Do: Project Cleanup

  • Currently, some of the URLs in the docs don't work.
  • Some dependencies aren't listed for some of the examples.
  • We need to create a script that can be run if neuromag2ft isn't installed.
  • Copy the modified neuromag2ft files into the mne-realtime module.

Initialize LSL Client without info specification

When an LSLClient is instantiated without an info object, e.g. with LSLClient(host="openbci_eeg_id255", wait_max=wait_max) as client:, the base client throws the following error:

File "C:\Users\ICN_admin\Anaconda3\envs\MNERealTime\lib\site-packages\mne_realtime\base_client.py", line 79, in enter
self.info = self._create_info()

File "C:\Users\ICN_admin\Anaconda3\envs\MNERealTime\lib\site-packages\mne_realtime\lsl_client.py", line 129, in _create_info
info = create_info(ch_names, sfreq, ch_types, montage=montage)

TypeError: create_info() got an unexpected keyword argument 'montage'

The current create_info no longer has a montage parameter (https://mne.tools/stable/generated/mne.create_info.html), so the call in lsl_client.py raises a TypeError instead of the expected ValueError.

I adapted this in the second commit in PR #25
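
For reference, a sketch of the fix described above: inside _create_info, build the info without the removed montage keyword and attach the montage afterwards, if one is available.

info = create_info(ch_names, sfreq, ch_types)
if montage is not None:
    info.set_montage(montage)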

Unable to install via pip

Hi!

I tried to install the mne_realtime library with the pip command "pip install mne_realtime", but I am faced with the error message below. My PC is a MacBook Pro (Mid 2014).

MasarunoMacBook-puro:~ masaru$ pip install mne_realtime
Collecting mne_realtime
ERROR: Could not find a version that satisfies the requirement mne_realtime (from versions: none)
ERROR: No matching distribution found for mne_realtime

I also found another person who has run into the same problem:
https://www.reddit.com/r/neuro/comments/d08exy/realtime_analysis_coming_from_mne_python/

So, please help me to install this library via pip.

Thank you in advance.

P.S.
I know that I can install this library by running "python setup.py install" from the "mne_realtime" directory downloaded from GitHub, but pip doesn't work properly.
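
One workaround, grounded in the Installation section above, is to install directly from GitHub instead of PyPI:

$ pip install https://api.github.com/repos/mne-tools/mne-realtime/zipball/main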

TMS-EEG development

It would be really great to be able to remove the TMS artifact in real time, visualize an evoked potential as a topomap and as a butterfly plot with bad channels excluded, and apply high- and low-pass filters.

ImportError: cannot import name '_check_pylsl_installed' from 'mne.utils'

Hi,

I am trying to use mne-realtime and I get this error:
ImportError: cannot import name '_check_pylsl_installed' from 'mne.utils'
It is used in lsl_client.py.

I checked the mne/utils module on GitHub and there is no such function, so it is not an issue with a missing file in my environment.

I am a beginner, so maybe I'm missing something?

Thanks

error with LSL

Hi,
I would like to use the mne library to read an EEG signal in real time with LSL and the OpenBCI GUI.
When I run the script plot_lslclient_rt.py, the terminal tells me that the LSL connection works but it closes right afterwards; the error comes from the call client.start_receive_thread(8).
Here is the error in my terminal:

"Opening raw data file C:\Users\sc01484\mne_data\MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
Read a total of 4 projection items:
PCA-v1 (1 x 102) idle
PCA-v2 (1 x 102) idle
PCA-v3 (1 x 102) idle
Average EEG reference (1 x 60) idle
Range : 6450 ... 48149 = 42.956 ... 320.665 secs
Ready.
Reading 0 ... 4505 = 0.000 ... 30.003 secs...
Removing projector <Projection | PCA-v1, active : False, n_channels : 102>
Removing projector <Projection | PCA-v2, active : False, n_channels : 102>
Removing projector <Projection | PCA-v3, active : False, n_channels : 102>
Client: Waiting for server to start
Looking for LSL stream openbcigui...
Found stream 'obci_eeg1' via openbcigui...
Client: Connected
Got epoch 1/100
Exception in thread Thread-9:
Traceback (most recent call last):
File "C:\Users\sc01484\Anaconda3\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\sc01484\Anaconda3\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\sc01484\Anaconda3\lib\site-packages\mne_realtime\base_client.py", line 16, in _buffer_recv_worker
for raw_buffer in client.iter_raw_buffers():
File "C:\Users\sc01484\Anaconda3\lib\site-packages\mne_realtime\lsl_client.py", line 83, in iter_raw_buffers
yield np.vstack(samples).T
File "<array_function internals>", line 5, in vstack
File "C:\Users\sc01484\Anaconda3\lib\site-packages\numpy\core\shape_base.py", line 283, in vstack
return _nx.concatenate(arrs, 0)
File "<array_function internals>", line 5, in concatenate
ValueError: need at least one array to concatenate
Streams closed"

Does anyone have a solution to this error?
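
Not a fix from the maintainers, but a quick sanity check before starting the client: confirm with pylsl that the OpenBCI GUI stream is actually delivering samples (the stream name below is taken from the log above):

from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('name', 'obci_eeg1', timeout=5)  # empty list if the stream is not found
inlet = StreamInlet(streams[0])
print(inlet.pull_sample(timeout=5))  # expect a (sample, timestamp) pair, not (None, None)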

Add conda-forge package?

Hello, I just wanted to ask if you'd be interested in a conda-forge package of MNE-Realtime? I could build one.

Asking because I'm right now looking at packages that would be nice to include in the MNE installers.

Is this package already compatible with the upcoming 1.0 release of MNE-Python?

mne_realtime.RtEpochs always returning an AssertionError

Using one of the examples provided by your package, as well as my own code, I can't get the RtEpochs class to work.
Here is a code sample taken from the examples that you provide (https://mne.tools/mne-realtime/auto_examples/plot_compute_rt_decoder.html#sphx-glr-auto-examples-plot-compute-rt-decoder-py):

# Authors: Mainak Jas <[email protected]>
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, ShuffleSplit
from mne.decoding import Vectorizer, FilterEstimator

import mne
from mne.datasets import sample

from mne_realtime import MockRtClient, RtEpochs

print(__doc__)

# Fiff file to simulate the realtime client
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)

tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)

tr_percent = 60  # Training percentage
min_trials = 10  # minimum trials after which decoding should start

# select gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
                       stim=True, exclude=raw.info['bads'])

# create the mock-client object
rt_client = MockRtClient(raw)

# create the real-time epochs object
rt_epochs = RtEpochs(rt_client, event_id, tmin, tmax, picks=picks, decim=1,
                     reject=dict(grad=4000e-13, eog=150e-6), baseline=None,
                     isi_max=4.)

# start the acquisition
rt_epochs.start()

# send raw buffers
rt_client.send_data(rt_epochs, picks, tmin=0, tmax=90, buffer_size=1000)

# Decoding in sensor space using a linear SVM
n_times = len(rt_epochs.times)

scores_x, scores, std_scores = [], [], []

# don't highpass filter because it's epoched data and the signal length
# is small
filt = FilterEstimator(rt_epochs.info, None, 40, fir_design='firwin')
scaler = preprocessing.StandardScaler()
vectorizer = Vectorizer()
clf = LogisticRegression(solver='lbfgs')

concat_classifier = Pipeline([('filter', filt), ('vector', vectorizer),
                              ('scaler', scaler), ('svm', clf)])

data_picks = mne.pick_types(rt_epochs.info, meg='grad', eeg=False, eog=False,
                            stim=False, exclude=raw.info['bads'])
ax = plt.subplot(111)
ax.set_xlabel('Trials')
ax.set_ylabel('Classification score (% correct)')
ax.set_title('Real-time decoding')
ax.set_xlim([min_trials, 50])
ax.set_ylim([30, 105])
plt.axhline(50, color='k', linestyle='--', label="Chance level")
plt.show(block=False)

for ev_num, ev in enumerate(rt_epochs.iter_evoked()):
    if ev_num >= 50:  # stop at 50
        break

    print("Just got epoch %d" % (ev_num + 1))

    if ev_num == 0:
        X = ev.data[np.newaxis, data_picks, :]
        y = int(ev.comment)  # the comment attribute contains the event_id
    else:
        X = np.concatenate((X, ev.data[np.newaxis, data_picks, :]), axis=0)
        y = np.append(y, int(ev.comment))

    if ev_num >= min_trials and ev_num % 5 == 0:
        cv = ShuffleSplit(5, test_size=0.2, random_state=42)  # 3 for speed
        scores_t = cross_val_score(concat_classifier, X, y, cv=cv,
                                   n_jobs=1) * 100

        std_scores.append(scores_t.std())
        scores.append(scores_t.mean())
        scores_x.append(ev_num)

        # Plot accuracy

        plt.plot(scores_x[-2:], scores[-2:], '-x', color='b',
                 label="Classif. score")
        ax.plot(scores_x[-1], scores[-1])

        hyp_limits = (np.asarray(scores) - np.asarray(std_scores),
                      np.asarray(scores) + np.asarray(std_scores))
        fill = plt.fill_between(scores_x, hyp_limits[0], y2=hyp_limits[1],
                                color='b', alpha=0.5)
        plt.pause(0.01)
        plt.draw()
        ax.collections.remove(fill)  # Remove old fill area

plt.fill_between(scores_x, hyp_limits[0], y2=hyp_limits[1], color='b',
                 alpha=0.5)
plt.draw()  # Final figure

It returns the following:

None
Opening raw data file C:\Users\mne_data\MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Reading 0 ... 41699  =      0.000 ...   277.709 secs...
Not setting metadata
No baseline correction applied
4 projection items activated
Traceback (most recent call last):
  File "C:/Users/Documents/test.py", line 40, in <module>
    rt_epochs = RtEpochs(rt_client, event_id, tmin, tmax, picks=picks, decim=1,
  File "<decorator-gen-469>", line 21, in __init__
  File "C:\Users\anaconda3\lib\site-packages\mne_realtime\epochs.py", line 155, in __init__
    super(RtEpochs, self).__init__(
  File "<decorator-gen-190>", line 21, in __init__
  File "C:\Users\anaconda3\lib\site-packages\mne\epochs.py", line 547, in __init__
    self._check_consistency()
  File "C:\Users\anaconda3\lib\site-packages\mne\epochs.py", line 557, in _check_consistency
    assert isinstance(self.drop_log, tuple)
AssertionError
