Comments (11)
I will have a try with that right now, actually, since it is giving unexpected results with dict. We can still see which interface we prefer after.
Requiring mne object-awareness seems fine to me. We rely on mne a lot and hence also expect users to know mne and the mne API. We should also keep `apply_function()` (https://mne.tools/stable/generated/mne.io.Raw.html#mne.io.Raw.apply_function) in mind for custom transforms.

I think we first need to know whether on-the-fly transforms are fast enough to be used with lazy loading. Would storing transforms in the datasets then be in addition to the current transforms?
> Requiring mne object-awareness seems fine to me. We rely on mne a lot and hence also expect users to know mne and the mne API. We should also keep `apply_function()` (https://mne.tools/stable/generated/mne.io.Raw.html#mne.io.Raw.apply_function) in mind for custom transforms.
True, `apply_function` would help here, and it conveniently works in-place! We could rewrite `_transform` then:
```python
def _transform(raw_or_epochs, transforms):
    for transform, transform_kwargs in transforms.items():
        if callable(transform):
            # kwargs must be unpacked, otherwise the dict is passed as `picks`
            raw_or_epochs.apply_function(transform, **transform_kwargs)
        else:
            if not hasattr(raw_or_epochs, transform):
                raise AttributeError(
                    f'MNE object does not have {transform} method.')
            getattr(raw_or_epochs.load_data(), transform)(**transform_kwargs)
```
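To see the dispatch in action without loading real recordings, here is a self-contained sketch. `FakeRaw` is a hypothetical stand-in that mimics only the attributes `_transform` touches (`apply_function`, `load_data`, and one ordinary method); the real MNE `apply_function` is channel-wise and far more featureful, so this is illustration only.

```python
import numpy as np

# Hypothetical stand-in for an MNE Raw object (illustration only).
class FakeRaw:
    def __init__(self, data):
        self._data = np.asarray(data, dtype=float)

    def apply_function(self, fun, **kwargs):
        # simplified: real MNE applies `fun` channel-wise and in-place
        self._data = fun(self._data, **kwargs)

    def load_data(self):
        return self  # real Raw would load data from disk; no-op here

    def crop(self, n_samples):
        self._data = self._data[:, :n_samples]


def _transform(raw_or_epochs, transforms):
    # Same dispatch as in the snippet above: callables go through
    # apply_function, strings are looked up as methods on the object.
    for transform, transform_kwargs in transforms.items():
        if callable(transform):
            raw_or_epochs.apply_function(transform, **transform_kwargs)
        else:
            if not hasattr(raw_or_epochs, transform):
                raise AttributeError(
                    f'MNE object does not have {transform} method.')
            getattr(raw_or_epochs.load_data(), transform)(**transform_kwargs)


raw = FakeRaw(np.ones((2, 10)))
_transform(raw, {
    (lambda x, factor: x * factor): dict(factor=1e6),  # callable transform
    'crop': dict(n_samples=5),                         # method-name transform
})
print(raw._data.shape)   # (2, 5)
print(raw._data[0, 0])   # 1000000.0
```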
> I think we first need to know whether on-the-fly transforms are fast enough to be used with lazy loading.
I think there is no way around this with super large datasets, don't you think? Unless we want to keep preprocessing everything offline, saving the preprocessed files to disk, and then loading them at training time.
Another point here is that if we were to work directly on numpy arrays (instead of on MNE objects) we might get a little performance gain with some transforms.
> Would storing transforms in the datasets then be in addition to the current transforms?
That was part of my question - the way we had implemented it during the sprint was that the transforms were stored in the dataset, and either applied in `__init__` (if loading == eager) or in `__getitem__` (if loading == lazy). The new `transform_concat_ds` function takes care of the first case, but outside of the dataset object. Do we need both approaches? I liked the solution with a list of transforms stored inside the dataset object as it's pretty flexible and also follows torchvision. The `transform_concat_ds` function is nice too, though it limits us to the eager loading scenario, I think.
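The eager/lazy split described above can be sketched with a toy dataset. The `loading` flag and the (callable, kwargs) pairs are assumptions for illustration, not the actual braindecode API:

```python
import numpy as np

# Hypothetical sketch: transforms stored in the dataset and applied either
# once in __init__ (eager) or on every access in __getitem__ (lazy).
class TransformableDataset:
    def __init__(self, arrays, transforms, loading='eager'):
        self.transforms = transforms
        self.loading = loading
        if loading == 'eager':
            # apply once, up front, to every example
            self.arrays = [self._apply(a) for a in arrays]
        else:
            self.arrays = list(arrays)

    def _apply(self, x):
        for fun, kwargs in self.transforms:
            x = fun(x, **kwargs)
        return x

    def __getitem__(self, i):
        x = self.arrays[i]
        if self.loading == 'lazy':
            # apply on the fly, each time an example is fetched
            x = self._apply(x)
        return x

    def __len__(self):
        return len(self.arrays)


transforms = [(lambda x, factor: x * factor, dict(factor=1e6))]
data = [np.ones(3), np.ones(3)]
eager = TransformableDataset(data, transforms, loading='eager')
lazy = TransformableDataset(data, transforms, loading='lazy')
print(eager[0][0], lazy[0][0])  # 1000000.0 1000000.0
```

Both paths yield the same samples; the trade-off is preprocessing time up front (eager) versus per-access cost in the data loader (lazy).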
I would argue for first having a clean and simple implementation of the eager / already-preprocessed lazy loading version. From that we can still extend to on-the-fly, which we may only need on a per-crop basis in the end anyway. Better to have a clean, fully implemented version that runs on actual examples and to go from there.
ok, now we have a working version for the eager case. I still would like to make some small improvements to the usage, e.g., right now we supply an OrderedDict, while we should simply supply a list. We don't use the dict functionality as far as I can see(?), and we already created a bug/unintended behavior in an example!
braindecode/examples/plot_bcic_iv_2a_moabb.py
Lines 83 to 88 in 914d1c9
The first `apply_function` is overwritten and never called! @gemeinl
So let's remove the OrderedDict and simply supply a list. Another alternative, as discussed before, would be to call them line by line with some syntax like:

```python
transform_concat_ds(pick_types, eeg=True, meg=False, stim=False)
transform_concat_ds(apply_function, fun=lambda x: x * 1e6, channel_wise=False)
```
Which one do you guys prefer? @hubertjb @sliwy @gemeinl
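The overwriting bug is a direct consequence of dict semantics: a repeated `'apply_function'` key silently replaces the earlier entry, while a list keeps every transform. A minimal demonstration (the `fun` values are placeholder strings, not real transforms):

```python
from collections import OrderedDict

# Dict keys must be unique, so the second 'apply_function' entry
# overwrites the first one.
as_dict = OrderedDict([
    ('apply_function', dict(fun='scale_to_uV')),
    ('filter', dict(l_freq=4., h_freq=38.)),
    ('apply_function', dict(fun='standardize')),  # clobbers the first entry!
])

# A list of [name, kwargs] pairs keeps all transforms in order.
as_list = [
    ['apply_function', dict(fun='scale_to_uV')],
    ['filter', dict(l_freq=4., h_freq=38.)],
    ['apply_function', dict(fun='standardize')],
]

print(len(as_dict))  # 2 -- the first apply_function is gone
print(len(as_list))  # 3 -- all transforms survive
```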
Also, as we have confused @agramfort ;) #90 (comment), maybe we can also think about a different name for the function? Like `apply_to_raws` or `apply_to_concat_ds`? Although I feel `transform_raws` would be fine for me (ducking from @agramfort ;))
@robintibor what would the list look like for the example you linked? I suppose something like this?
```python
raw_transforms = [
    ['pick_types', dict(eeg=True, meg=False, stim=False)],
    ['apply_function', dict(fun=lambda x: x * 1e6, channel_wise=False)],
    ['filter', dict(l_freq=low_cut_hz, h_freq=high_cut_hz)],
    ['apply_function', dict(fun=standardize_func, channel_wise=False)],
]
```
Yes, like that @hubertjb, so really just removing the wrapping in OrderedDict.
I'm personally in favor of the list approach then. I like the convenience of having all the transforms in a single variable that can be reused if needed.
oki, now implemented in #94