mtrfpy's People

Contributors

berndie, britta-wstnr, olebialas, ruix6, sappelhoff


mtrfpy's Issues

Questions about the `TRF().train()` method

Hi mTRFpy developers,

I have some questions about the `TRF().train()` method.

  1. The `k` value in `TRF().train()`: the docstring says the default is 5, but the function signature sets `k=-1`. Just to confirm, the default is leave-one-out cross-validation, right?

  2. In the original MATLAB implementation, data can be partitioned into separate training and test sets with the `mTRFpartition` function. Is there an equivalent function in this library? Or do we need to split the data into training and test sets manually, feed the training set to `TRF().train()` to train and cross-validate the model, and then call `TRF().predict()` on the test set?
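Until an equivalent of `mTRFpartition` exists, a manual trial-level split is straightforward. The sketch below uses a hypothetical helper name (`split_trials` is not part of mTRFpy) and assumes `stimulus` and `response` are equal-length lists of per-trial arrays, as in the mTRFpy examples:

```python
# Hypothetical helper: hold out the last trials as a test set.
# Assumes `stimulus` and `response` are equal-length lists of
# per-trial arrays, as in the mTRFpy examples.
def split_trials(stimulus, response, test_fraction=0.2):
    n_test = max(1, int(len(stimulus) * test_fraction))
    train = (stimulus[:-n_test], response[:-n_test])
    test = (stimulus[-n_test:], response[-n_test:])
    return train, test

# With 10 trials and test_fraction=0.2 this yields 8 train / 2 test trials.
```

The training split would then go to `TRF().train()` (which cross-validates internally), and the held-out trials to `TRF().predict()`.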

Backward model with one-dimensional feature

Hi!

Thanks for making a Python version of the mTRF toolbox!

I would like to report the following issue: the backward model currently does not work for one-dimensional features. Consider the basic usage example of the backward model (https://mtrfpy.readthedocs.io/en/latest/basics.html#backward-model):

envelope = [s.mean(axis=1) for s in stimulus]
bwd_trf = TRF(direction=-1)
bwd_trf.train(envelope, response, fs, tmin, tmax, regularization=1000)
r_bwd, mse_bwd = cross_validate(bwd_trf, stimulus, response)
print(f"correlation between actual and predicted envelope: {r_bwd.round(3)}")

This code works. However, it uses `stimulus` (i.e., the spectrogram with 16 frequency bands). If you use the same code but replace `stimulus` with `envelope` (`r_bwd, mse_bwd = cross_validate(bwd_trf, envelope, response)`), an assertion error occurs (`assert x.ndim == 2 and y.ndim == 2`). Reformatting the shape of the one-dimensional feature (to resemble `stimulus`) produced different errors.
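For reference, the assertion fires because each envelope trial is one-dimensional. A possible first step (untested against mTRFpy itself, and the report above notes that reshaping then led to further errors) is to give each trial an explicit feature axis:

```python
import numpy as np

# A 1-D envelope trial of 100 samples, as produced by s.mean(axis=1)
envelope_trial = np.random.randn(100)              # shape (100,)

# Add an explicit feature axis so the x.ndim == 2 assertion holds
envelope_trial_2d = envelope_trial[:, np.newaxis]  # shape (100, 1)
```

This only addresses the first assertion; the downstream errors the reporter mentions would still need to be fixed in the library.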

Thanks!

Not preloading covariance matrices causes error in crossval

When setting preload=False in the TRF instance, crossval produces an error. I don't have time to fix this right now, so I'll just leave a minimal example here:

from mtrf import TRF, load_sample_data
from mtrf.stats import crossval

tmin, tmax = 0, 0.4
regularization = 1000
stimulus, response, fs = load_sample_data(n_segments=10)

# This works
trf = TRF()
crossval(trf, stimulus, response, fs, tmin, tmax, regularization)

# This does not
trf = TRF(preload=False)
crossval(trf, stimulus, response, fs, tmin, tmax, regularization)

[JOSS] tests fail locally

Running the test suite in a fresh environment locally, I get failures (Linux, Python 3.11).

I see that Python 3.11 is not yet covered in your test suite; I'll add it in #9.

(mtrf) stefanappelhoff@arc-lin-004309:~/Desktop/mTRFpy$ pytest --verbose
======================================================================================= test session starts ========================================================================================
platform linux -- Python 3.11.4, pytest-7.4.0, pluggy-1.2.0 -- /home/stefanappelhoff/miniconda3/envs/mtrf/bin/python3.11
cachedir: .pytest_cache
rootdir: /home/stefanappelhoff/Desktop/mTRFpy
collected 12 items                                                                                                                                                                                 

tests/test_basics.py::test_check_data PASSED                                                                                                                                                 [  8%]
tests/test_basics.py::test_lag_matrix PASSED                                                                                                                                                 [ 16%]
tests/test_basics.py::test_arithmatic PASSED                                                                                                                                                 [ 25%]
tests/test_matlab_examples.py::test_encoding PASSED                                                                                                                                          [ 33%]
tests/test_matlab_examples.py::test_decoding PASSED                                                                                                                                          [ 41%]
tests/test_matlab_examples.py::test_transform FAILED                                                                                                                                         [ 50%]
tests/test_model.py::test_train PASSED                                                                                                                                                       [ 58%]
tests/test_model.py::test_predict PASSED                                                                                                                                                     [ 66%]
tests/test_model.py::test_test PASSED                                                                                                                                                        [ 75%]
tests/test_model.py::test_save_load PASSED                                                                                                                                                   [ 83%]
tests/test_stats.py::test_crossval PASSED                                                                                                                                                    [ 91%]
tests/test_stats.py::test_permutation PASSED                                                                                                                                                 [100%]

============================================================================================= FAILURES =============================================================================================
__________________________________________________________________________________________ test_transform __________________________________________________________________________________________

    def test_transform():
        transform_results = np.load(  # expected results
            root / "results" / "transform_results.npy", allow_pickle=True
        ).item()
    
        t = transform_results["t"]
        w = transform_results["w"]
        direction = transform_results["dir"][0, 0]
    
        trf_decoder = TRF(direction=-1)
        tmin, tmax = -0.1, 0.2
        trf_decoder.train(stimulus, response, fs, tmin, tmax, 100)
        trf_trans_enc = trf_decoder.to_forward(response)
    
        scale = 1e-5
>       np.testing.assert_almost_equal(trf_trans_enc.weights * scale, w * scale, decimal=11)

tests/test_matlab_examples.py:79: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../miniconda3/envs/mtrf/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
../../miniconda3/envs/mtrf/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<function assert_array_almost_equal.<locals>.compare at 0x7f4aafab5da0>, array([[[-1.63891735e-01, -1.75839906e-01, -... 3.50612939e-03,  2.12364773e-02,  3.57734865e-02, ...,
         -1.11449796e-02, -2.29089935e-02, -2.88814115e-02]]]))
kwds = {'err_msg': '', 'header': 'Arrays are not almost equal to 11 decimals', 'precision': 11, 'verbose': True}

    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Arrays are not almost equal to 11 decimals
E           
E           Mismatched elements: 1 / 81920 (0.00122%)
E           Max absolute difference: 1.52442529e-11
E           Max relative difference: 1.7259e-06
E            x: array([[[-1.63891734619e-01, -1.75839906370e-01, -1.81312251773e-01,
E                    ..., -6.04563894823e-02, -3.13816863207e-02,
E                     6.20892295951e-03],...
E            y: array([[[-1.63891734619e-01, -1.75839906370e-01, -1.81312251773e-01,
E                    ..., -6.04563894823e-02, -3.13816863207e-02,
E                     6.20892295950e-03],...

../../miniconda3/envs/mtrf/lib/python3.11/contextlib.py:81: AssertionError
========================================================================================= warnings summary =========================================================================================
tests/test_matlab_examples.py::test_encoding
tests/test_matlab_examples.py::test_decoding
tests/test_matlab_examples.py::test_transform
tests/test_model.py::test_predict
  /home/stefanappelhoff/Desktop/mTRFpy/mtrf/matrices.py:139: RuntimeWarning: invalid value encountered in matmul
    cov_xx[i_x] = x_lag.T @ x_lag

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
===================================================================================== short test summary info ======================================================================================
FAILED tests/test_matlab_examples.py::test_transform - AssertionError: 
======================================================================= 1 failed, 11 passed, 4 warnings in 134.96s (0:02:14) =======================================================================
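For context, this looks like a tolerance edge case rather than a real numerical problem: per NumPy's documented rule, `assert_almost_equal` with `decimal=11` requires `abs(actual - desired) < 1.5 * 10**-11`, and the single mismatched element (1 of 81920) sits just outside that bound:

```python
# Reported maximum absolute difference from the test failure above
max_abs_diff = 1.52442529e-11

# Tolerance implied by decimal=11 in np.testing.assert_almost_equal,
# per NumPy's documented rule: abs(diff) < 1.5 * 10**(-decimal)
tolerance = 1.5 * 10.0 ** -11

# The mismatched element exceeds the bound by under 2 percent
assert max_abs_diff > tolerance
```

Loosening the comparison slightly (e.g. `decimal=10`, or `assert_allclose` with a relative tolerance) would likely make the test robust to platform-dependent rounding.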

[REVIEW]: mTRFpy: A Python package for temporal response function analysis

As part of the JOSS review:

mTRFpy is derived from the widely used MATLAB mTRF package, which is commonly used for modelling multivariate stimulus-response neurophysiological data. I found a few previous works along the same lines:

a) (Uncited) - pymTRF (https://github.com/SRSteinkamp/pymtrf); more Python<-->MATLAB friendly
b) (Cited) - Eelbrain 0.39
c) (Cited) - naplib

As referenced in (b), Eelbrain 0.39 is an open-source Python toolkit for MEG and EEG data analysis that includes TRF modelling. The paper asserts that mTRFpy is a specialized, dedicated package for mTRFs.

In parallel, the open-source package pymtrf (https://github.com/SRSteinkamp/pymtrf) by @SRSteinkamp is also a Python version of the mTRF MATLAB package. It is also lightweight and application-specific, like mTRFpy.

Considering the above,

i) Can mTRFpy be considered more user-friendly or straightforward than pymTRF (a)? If so, could you provide an illustrative example for comparison?
ii) Is there a discernible performance difference, such as mTRFpy handling large datasets more efficiently than Eelbrain 0.39, naplib, or pymTRF?
iii) Is mTRFpy capable of applications beyond neuroscience? Highlighting any additional scope would enhance its relevance to a broader scientific audience.

Suggestion: Given the similarity between mTRFpy and the open-source, yet unpublished, project pymTRF (a), it might be worth engaging the author of pymTRF for insights and potential collaboration.

Thank you, and I am happy to hear your suggestions.
Saran

[JOSS] link repository to Zenodo and add CITATION.cff

You can integrate Zenodo and GitHub: https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content

Later in the JOSS submission process you will have to make an archive of your software anyhow; see:

https://joss.readthedocs.io/en/latest/submitting.html#the-review-process

Upon successful completion of the review, authors will make a tagged release of the software, and deposit a copy of the repository with a data-archiving service such as Zenodo or figshare, get a DOI for the archive, and update the review issue thread with the version number and DOI.

I strongly recommend Zenodo for its excellent integration with GitHub.

A CITATION.cff file can be used to control the metadata on Zenodo: https://help.zenodo.org/faq/#github

A CITATION.cff file can also be a nice opportunity to give contributors credit.
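For illustration, a minimal sketch of what such a file could look like (all field values below are placeholders, not the actual project metadata):

```yaml
# CITATION.cff -- minimal example; replace the placeholder values
cff-version: 1.2.0
title: mTRFpy
message: "If you use this software, please cite it as below."
authors:
  - family-names: Doe        # placeholder contributor
    given-names: Jane
version: 0.0.0               # placeholder release version
date-released: "2023-01-01"  # placeholder date
```

Zenodo reads this file on each GitHub release to populate the archive's title, authors, and version metadata.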

See also my comments here: #8 (comment)

return best_regularization in the TRF.test

I think it is still a good idea to return `best_regularization`: it is chosen based on the MSE of the training folds, but we currently only return the r and MSE for the test fold.
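To illustrate the suggestion with made-up numbers (the MSE values below are invented, not produced by the library):

```python
import numpy as np

# Hypothetical training-fold MSEs for a grid of regularization values
regularizations = [0.1, 1.0, 10.0, 100.0, 1000.0]
training_fold_mse = [0.92, 0.71, 0.55, 0.63, 0.80]  # invented numbers

# The method already has to compute this argmin internally to pick
# the best lambda; the request is simply to also return that value.
best_regularization = regularizations[int(np.argmin(training_fold_mse))]
```

Returning `best_regularization` alongside `r` and `mse` would let users see which lambda the training folds selected.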

[JOSS] additional docs

  • xref: openjournals/joss-reviews#5657

  • please describe the release process of the software, e.g., in the contributor documentation -- potentially listing maintainers.

  • please add how to set up a developer environment / installation for contributors

  • please add how to run the test suite, how to format the code (black), and how to build the docs locally (see also some changes in #9)

Multi-channel mTRFs ?

Hello,

First of all, thanks for this great initiative.

I am trying to use your library to perform an mTRF analysis, but I am encountering some issues. My understanding is that it should be able to handle cases where you have multiple features AND multiple channels.

So, for example, if I have data with only one feature and a multi-channel brain output (say, 5 channels), I should not have an issue using your library. But I am facing an issue, and I think it is caused by the data verification located at lines 50-53. In my case, this fails since I have the following: for every trial, my input has dimension (1,) (only one feature; this would have been (1, N) for N features) and my output has dimension (128,) (for 128 channels; this would have been (128, N) for N features and 128 channels).

Is there any way to make this example work?

To replicate it, you can use the following sample data:

import numpy as np

stimulus = np.array(
    [[-0.04975775],
     [-0.04975775],
     [-0.04975775],
     [-0.04975775],
     [-0.04975775]]
)
response = np.array(
    [[ 2.14178848,  1.66653419, -0.46084577, -0.75411433, -1.61435759],
     [ 2.21314073,  1.86260605, -0.23229724, -0.75092149, -1.85647023],
     [ 2.14174962,  1.98497665, -0.07170469, -0.9689309 , -2.16715288],
     [ 1.92276275,  1.92720366, -0.01842625, -1.10627818, -2.35880566],
     [ 1.55253088,  1.63285232, -0.05710639, -1.19575059, -2.18979287]]
)
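A possible preprocessing step (a hypothetical helper, not part of the library) that might satisfy the ndim check is to give 1-D trials an explicit feature/channel axis before passing them in:

```python
import numpy as np

def ensure_2d(trial):
    """Reshape a (n_samples,) trial to (n_samples, 1); leave 2-D data as-is."""
    trial = np.asarray(trial)
    return trial.reshape(-1, 1) if trial.ndim == 1 else trial

one_feature = ensure_2d(np.array([-0.05, -0.05, -0.05]))  # (3,) -> (3, 1)
```

Whether the rest of the pipeline then handles a single feature correctly is a separate question; the related backward-model issue above suggests it may not.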

Thank you very much.

P.S. I know this is a lot of work for you, but I also noticed that the docstring of the `_check_data` function is not up to date, possibly due to some changes in the prototypes.
