
superbrucejia / eeg-dl

Stars: 859 | Watchers: 19 | Forks: 214 | Size: 960 KB

A Deep Learning library for EEG Tasks (Signals) Classification, based on TensorFlow.

Home Page: https://www.nitrc.org/projects/eeg_dl_library

License: MIT License

Python 88.37% MATLAB 11.63%
deep-learning eeg-classification eeg-signals-processing tensorflow motor-imagery-classification eeg-data cnn rnn gcn one-shot-learning

eeg-dl's Introduction



Chat on Gitter Python Version TensorFlow Version MIT License


Welcome to EEG Deep Learning Library

EEG-DL is a Deep Learning (DL) library written in TensorFlow for EEG tasks (signals) classification. It provides the latest DL algorithms and is kept up to date.

Table of Contents

Documentation

The supported models include

No. Model Codes
1 Deep Neural Networks DNN
2 Convolutional Neural Networks [Paper] [Tutorial] CNN
3 Deep Residual Convolutional Neural Networks [Paper] ResNet
4 Thin Residual Convolutional Neural Networks [Paper] Thin ResNet
5 Densely Connected Convolutional Neural Networks [Paper] DenseNet
6 Fully Convolutional Neural Networks [Paper] FCN
7 One Shot Learning with Siamese Networks (CNNs Backbone)
[Paper] [Tutorial]
Siamese Networks
8 Graph Convolutional Neural Networks [Paper] [Presentation] GCN / Graph CNN
9 Deep Residual Graph Convolutional Neural Networks [Paper] ResGCN
10 Densely Connected Graph Convolutional Neural Networks DenseGCN
11 Bayesian Convolutional Neural Networks via Variational Inference [Paper] Bayesian CNNs
12 Recurrent Neural Networks [Paper] RNN
13 Attention-based Recurrent Neural Networks [Paper] RNN with Attention
14 Bidirectional Recurrent Neural Networks [Paper] BiRNN
15 Attention-based Bidirectional Recurrent Neural Networks [Paper] BiRNN with Attention
16 Long Short-Term Memory [Paper] LSTM
17 Attention-based Long Short-Term Memory [Paper] LSTM with Attention
18 Bidirectional Long Short-Term Memory [Paper] BiLSTM
19 Attention-based Bidirectional Long Short-Term Memory [Paper] BiLSTM with Attention
20 Gated Recurrent Unit [Paper] GRU
21 Attention-based Gated Recurrent Unit [Paper] GRU with Attention
22 Bidirectional Gated Recurrent Unit [Paper] BiGRU
23 Attention-based Bidirectional Gated Recurrent Unit [Paper] BiGRU with Attention
24 Attention-based BiLSTM + GCN [Paper] Attention-based BiLSTM / GCN
25 Transformer [Paper] [Paper] Transformer
26 Transfer Learning with Transformer (this code is only for reference; you can modify it to fit your data) Stage 1: Pre-training / Stage 2: Fine-tuning

One EEG Motor Imagery (MI) benchmark is currently supported. Other benchmarks in the field of EEG or BCI can be found here.

No. Dataset Tutorial
1 EEG Motor Movement/Imagery Dataset Tutorial

The evaluation criteria consist of

Evaluation Metrics Tutorial
Confusion Matrix Tutorial
Accuracy / Precision / Recall / F1 Score / Kappa Coefficient Tutorial
Receiver Operating Characteristic (ROC) Curve / Area under the Curve (AUC) -
Paired t-test via R language Tutorial

The evaluation metrics mainly support four-class classification. If you wish to switch to two-class or three-class classification, please modify this file to adapt it to the number of classes in your own dataset. Meanwhile, details about the evaluation metrics can be found in this paper.
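For reference, here is a minimal sketch of these metrics computed with scikit-learn for an arbitrary number of classes (this is not the library's Metrics.py; y_true and y_pred are placeholders for your own integer class labels):

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, cohen_kappa_score, confusion_matrix)

    def evaluate(y_true, y_pred):
        """Compute the metrics listed above for any number of classes."""
        return {
            "confusion_matrix": confusion_matrix(y_true, y_pred),
            "accuracy": accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred, average="macro"),
            "recall": recall_score(y_true, y_pred, average="macro"),
            "f1": f1_score(y_true, y_pred, average="macro"),
            "kappa": cohen_kappa_score(y_true, y_pred),
        }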

Usage Demo

  1. (Under Any Python Environment) Download the EEG Motor Movement/Imagery Dataset via this script.

    $ python MIND_Get_EDF.py
    
  2. (Under Python 2.7 Environment) Read the .edf files (one of the raw EEG signal formats) and save them into Matlab .m files via this script. FYI, this script must be executed under a Python 2 environment (Python 2.7 is recommended) because of some Python 2 syntax. If you run the file under a Python 3 environment, there might be no error, but the labels of the EEG tasks would be totally messed up.

    $ python Extract-Raw-Data-Into-Matlab-Files.py
    
  3. Preprocess the dataset via Matlab and save the data into Excel files (training_set, training_label, test_set, and test_label) via these scripts, with different scripts for different models. FYI, every row of an Excel file is one sample, and the columns can be regarded as features, e.g., 4096 columns mean 64 channels X 64 time points. Later, the models will reshape the 4096 columns into a matrix with the shape 64 channels X 64 time points. You can change the number of columns to fit your own needs, e.g., the real dimension of your own dataset; a minimal reshaping sketch is shown below.
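    For illustration only, a minimal sketch (assuming pandas and NumPy, hypothetical file names, and a 64-channel X 64-time-point layout) of how a flattened row maps back to a channels X time matrix:

    import numpy as np
    import pandas as pd

    n_channels, n_timepoints = 64, 64   # adjust to your own montage and window length

    # Each row of the sheet is one sample with n_channels * n_timepoints columns.
    train_set = pd.read_csv('training_set.csv', header=None).values.astype('float32')
    assert train_set.shape[1] == n_channels * n_timepoints   # e.g., 4096 columns

    # Reshape every flattened sample back into a (channels, time points) matrix.
    train_set = train_set.reshape(-1, n_channels, n_timepoints)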

  4. (Prerequisites) Train and test the deep learning models under the Python 3.6 environment (highly recommended) for EEG signal/task classification via the EEG-DL library, which provides multiple SOTA DL models.

    Python Version: Python 3.6 (Recommended)
    TensorFlow Version: TensorFlow 1.13.1
    

    Use the below command to install TensorFlow GPU Version 1.13.1:

    $ pip install --upgrade --force-reinstall tensorflow-gpu==1.13.1 --user
  5. Read the evaluation criteria (over training iterations) via TensorBoard. You can follow this tutorial. When you have finished training the model, you will find an "events.out.tfevents.***" file in the output folder, e.g., "/Users/shuyuej/Desktop/trained_model/". You can use the following command in your terminal:

    $ tensorboard --logdir="/Users/shuyuej/Desktop/trained_model/" --host=127.0.0.1

    You can open the page in Google Chrome (highly recommended):

    http://127.0.0.1:6006/

    Then you can read the criteria and save them into CSV files; a minimal export sketch is shown below.
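    For reference only, the following is a minimal sketch (assuming the tensorboard and pandas Python packages are installed; the log directory is the example path from above) that dumps every scalar tag recorded by TensorBoard to a CSV file:

    import pandas as pd
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    log_dir = "/Users/shuyuej/Desktop/trained_model/"   # example path from above
    acc = EventAccumulator(log_dir)
    acc.Reload()   # parse the events.out.tfevents.* file(s)

    for tag in acc.Tags()["scalars"]:
        events = acc.Scalars(tag)
        pd.DataFrame({"step": [e.step for e in events],
                      "value": [e.value for e in events]}).to_csv(tag.replace("/", "_") + ".csv", index=False)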

  6. Finally, draw publication-quality figures using Matlab or Python. Please follow these scripts.
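    As a starting point for the Python route, here is a minimal matplotlib sketch (not the repository's Draw_Confusion_Matrix.py; the CSV file name is hypothetical) that plots a saved confusion matrix:

    import numpy as np
    import matplotlib.pyplot as plt

    cm = np.loadtxt('confusion_matrix.csv', delimiter=',')   # hypothetical exported matrix

    fig, ax = plt.subplots()
    im = ax.imshow(cm, cmap='Blues')
    ax.set_xlabel('Predicted class')
    ax.set_ylabel('True class')
    fig.colorbar(im)
    fig.savefig('confusion_matrix.png', dpi=300)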

Notice

  1. I have tested all the files (Python and Matlab) under macOS. Be advised that some Matlab functions differ between the Windows Operating System (OS) and macOS. For example, I used the "readmatrix" function to read CSV files on macOS, but I had to use the "csvread" function on Windows because "readmatrix" was not available there. If you meet similar problems, I recommend searching Google or Baidu for them; you can definitely work them out.

  2. For the GCNs-Net (GCN model), the graph convolutional layers leave the dimensionality of the graph unchanged, while each max-pooling layer reduces it by a factor of 2. That means, if you have an N X N graph Laplacian, after one max-pooling layer the dimension will be N/2 X N/2. If you have a 15-channel EEG system, you cannot use max-pooling unless you select a channel subset such as 14 --> 7, 12 --> 6 --> 3, 10 --> 5, or 8 --> 4 --> 2 --> 1, etc. The details can be reviewed in this paper.
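    As a quick sanity check (this helper is not part of EEG-DL), the sketch below lists the graph size after each max-pooling layer and raises an error when the channel count cannot be halved evenly:

    def pooling_sizes(num_channels, levels):
        """Graph size after each max-pooling layer; each pooling halves the graph."""
        sizes = [num_channels]
        for _ in range(levels):
            if sizes[-1] % 2 != 0:
                raise ValueError("cannot halve %d nodes evenly; reduce the pooling "
                                 "levels or select a channel subset" % sizes[-1])
            sizes.append(sizes[-1] // 2)
        return sizes

    print(pooling_sizes(64, 5))   # [64, 32, 16, 8, 4, 2]
    print(pooling_sizes(12, 2))   # [12, 6, 3]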

  3. The loss function can be changed or modified in this file.
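    For orientation only, here is a generic TensorFlow 1.x-style sketch of a softmax cross-entropy loss with an L2 weight penalty (this is not the library's Loss.py; the function name and arguments are placeholders, and a non-empty list of L2 terms is assumed):

    import tensorflow as tf

    def softmax_loss(logits, labels, regularizers, l2_lambda=5e-4):
        """Cross-entropy over integer class labels plus an L2 penalty on the weights."""
        cross_entropy = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
        l2_penalty = l2_lambda * tf.add_n(regularizers)   # e.g., the tf.nn.l2_loss terms collected per layer
        return cross_entropy + l2_penalty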

  4. The dataset loader can be changed or modified in this file.

Research Ideas

  1. Dynamic Graph Convolutional Neural Networks [Paper Survey] [Paper Reading]

  2. Neural Architecture Search / AutoML (Automatic Machine Learning) [Tsinghua AutoGraph]

  3. Reinforcement Learning Algorithms (e.g., Deep Q-Learning) [Tsinghua Tianshou] [Doc for Chinese Readers]

  4. Bayesian Convolutional Neural Networks [Paper] [Thesis] [Codes]

  5. Transformer / Self-attention / Non-local Modeling [Paper Collections] [Transformer Codes] [Non-local Modeling PyTorch Codes]

    [Why Non-local Modeling?] [Paper] [A Detailed Presentation] [Slides] [Poster]

    [Why Transformer?]

    [Transformer and Attention Mechanism Introduction]

    [Annual Review of Progress on Vision Transformers (in Chinese)]

  6. Self-supervised Learning + Transformer [Presentation]

Common Issues

  1. ValueError: Cannot feed value of shape (1024, 1) for Tensor 'input/label:0', which has shape '(1024,)'

    To solve this issue, you have to squeeze the shape of the labels from (1024, 1) to (1024,) using np.squeeze. Please edit the DataLoader.py file. From original codes:

    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)
    train_labels = np.array(train_labels).astype('float32')
    
    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)
    test_labels = np.array(test_labels).astype('float32')

    to

    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)
    train_labels = np.array(train_labels).astype('float32')
    train_labels = np.squeeze(train_labels)
    
    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)
    test_labels = np.array(test_labels).astype('float32')
    test_labels = np.squeeze(test_labels)
  2. InvalidArgumentError: Nan in summary histogram for training/logits/bias/gradients

    To solve this issue, you have to comment all the histogram summary. Please edit the GCN_Model.py file.

    # Comment the above tf.summary.histogram from the GCN_Model.py File
    
    # # Histograms.
    # for grad, var in grads:
    #     if grad is None:
    #         print('warning: {} has no gradient'.format(var.op.name))
    #     else:
    #         tf.summary.histogram(var.op.name + '/gradients', grad)
    
    def _weight_variable(self, shape, regularization=True):
        initial = tf.truncated_normal_initializer(0, 0.1)
        var = tf.get_variable('weights', shape, tf.float32, initializer=initial)
        if regularization:
            self.regularizers.append(tf.nn.l2_loss(var))
        # tf.summary.histogram(var.op.name, var)
        return var
    
    def _bias_variable(self, shape, regularization=True):
        initial = tf.constant_initializer(0.1)
        var = tf.get_variable('bias', shape, tf.float32, initializer=initial)
        if regularization:
            self.regularizers.append(tf.nn.l2_loss(var))
        # tf.summary.histogram(var.op.name, var)
        return var
  3. TypeError: len() of unsized object

    To solve this issue, you have to change the coarsening level to fit your own needs, and you can change it to see the difference. Please edit the main-GCN.py file. For example, if you want to apply the GCNs-Net to a 10-channel EEG system, you have to set "levels" equal to 1 or 0, because at most one max-pooling is possible (10 --> 5). You can change the "levels" argument to 1 or 0 to observe the difference.

    # This is the coarsen levels, you can definitely change the level to observe the difference
    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=5, self_connections=False)

    to

    # This is the coarsen levels, you can definitely change the level to observe the difference
    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=1, self_connections=False)
  4. tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 7 which is outside the valid range of [0, 7). Label values: 5 2 3 3 1 5 5 4 7 4 2 2 1 7 5 6 3 4 2 4

    To solve this issue for the GCNs-Net, when you make your dataset, you have to start your labels from 0 rather than 1. For example, if you have seven classes, your labels should be 0 (first class), 1 (second class), 2 (third class), 3 (fourth class), 4 (fifth class), 5 (sixth class), and 6 (seventh class) instead of 1, 2, 3, 4, 5, 6, 7.
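    A minimal sketch of the fix (the file name follows the dataset layout described earlier; adjust it to your own labels):

    import numpy as np
    import pandas as pd

    labels = np.squeeze(pd.read_csv('training_label.csv', header=None).values.astype('float32'))

    # If the labels were created as 1..K, shift them to 0..K-1 before training the GCNs-Net.
    if labels.min() == 1:
        labels = labels - 1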

  5. IndexError: list index out of range

    To solve this issue, first of all, please double-check your Python environment: a Python 2.7 environment is required. Then please install version 0.1.11 of pyEDFlib. The installation command is as follows:

    $ pip install pyEDFlib==0.1.11

Structure of the Code

At the root of the project, you will see:

├── Download_Raw_EEG_Data
│   ├── Extract-Raw-Data-Into-Matlab-Files.py
│   ├── MIND_Get_EDF.py
│   ├── README.md
│   └── electrode_positions.txt
├── Draw_Photos
│   ├── Draw_Accuracy_Photo.m
│   ├── Draw_Box_Photo.m
│   ├── Draw_Confusion_Matrix.py
│   ├── Draw_Loss_Photo.m
│   ├── Draw_ROC_and_AUC.py
│   └── figure_boxplot.m
├── LICENSE
├── Logo.png
├── MANIFEST.in
├── Models
│   ├── DatasetAPI
│   │   └── DataLoader.py
│   ├── Evaluation_Metrics
│   │   └── Metrics.py
│   ├── Initialize_Variables
│   │   └── Initialize.py
│   ├── Loss_Function
│   │   └── Loss.py
│   ├── Network
│   │   ├── BiGRU.py
│   │   ├── BiGRU_with_Attention.py
│   │   ├── BiLSTM.py
│   │   ├── BiLSTM_with_Attention.py
│   │   ├── BiRNN.py
│   │   ├── BiRNN_with_Attention.py
│   │   ├── CNN.py
│   │   ├── DNN.py
│   │   ├── DenseCNN.py
│   │   ├── Fully_Conv_CNN.py
│   │   ├── GRU.py
│   │   ├── GRU_with_Attention.py
│   │   ├── LSTM.py
│   │   ├── LSTM_with_Attention.py
│   │   ├── RNN.py
│   │   ├── RNN_with_Attention.py
│   │   ├── ResCNN.py
│   │   ├── Siamese_Network.py
│   │   ├── Thin_ResNet.py
│   │   └── lib_for_GCN
│   │       ├── DenseGCN_Model.py
│   │       ├── GCN_Model.py
│   │       ├── ResGCN_Model.py
│   │       ├── coarsening.py
│   │       └── graph.py
│   ├── __init__.py
│   ├── main-BiGRU-with-Attention.py
│   ├── main-BiGRU.py
│   ├── main-BiLSTM-with-Attention.py
│   ├── main-BiLSTM.py
│   ├── main-BiRNN-with-Attention.py
│   ├── main-BiRNN.py
│   ├── main-CNN.py
│   ├── main-DNN.py
│   ├── main-DenseCNN.py
│   ├── main-DenseGCN.py
│   ├── main-FullyConvCNN.py
│   ├── main-GCN.py
│   ├── main-GRU-with-Attention.py
│   ├── main-GRU.py
│   ├── main-LSTM-with-Attention.py
│   ├── main-LSTM.py
│   ├── main-RNN-with-Attention.py
│   ├── main-RNN.py
│   ├── main-ResCNN.py
│   ├── main-ResGCN.py
│   ├── main-Siamese-Network.py
│   └── main-Thin-ResNet.py
├── NEEPU.png
├── Preprocess_EEG_Data
│   ├── For-CNN-based-Models
│   │   └── make_dataset.m
│   ├── For-DNN-based-Models
│   │   └── make_dataset.m
│   ├── For-GCN-based-Models
│   │   └── make_dataset.m
│   ├── For-RNN-based-Models
│   │   └── make_dataset.m
│   └── For-Siamese-Network-One-Shot-Learning
│       └── make_dataset.m
├── README.md
├── Saved_Files
│   └── README.md
├── requirements.txt
└── setup.py

Citation

If you find our library useful, please consider citing our papers in your publications. We provide the BibTeX entries below.

@article{hou2022gcn,
	title   = {{GCNs-Net}: A Graph Convolutional Neural Network Approach for Decoding Time-Resolved EEG Motor Imagery Signals},
        author  = {Hou, Yimin and Jia, Shuyue and Lun, Xiangmin and Hao, Ziqian and Shi, Yan and Li, Yang and Zeng, Rui and Lv, Jinglei},
	journal = {IEEE Transactions on Neural Networks and Learning Systems},
	volume  = {},
	number  = {},
	pages   = {1-12},
	year    = {Sept. 2022},
	doi     = {10.1109/TNNLS.2022.3202569}
}
  
@article{hou2020novel,
	title     = {A Novel Approach of Decoding EEG Four-class Motor Imagery Tasks via Scout {ESI} and {CNN}},
	author    = {Hou, Yimin and Zhou, Lu and Jia, Shuyue and Lun, Xiangmin},
	journal   = {Journal of Neural Engineering},
	volume    = {17},
	number    = {1},
	pages     = {016048},
	year      = {Feb. 2020},
	publisher = {IOP Publishing},
	doi       = {10.1088/1741-2552/ab4af6}
}

@article{hou2022deep,
	title   = {Deep Feature Mining via the Attention-Based Bidirectional Long Short Term Memory Graph Convolutional Neural Network for Human Motor Imagery Recognition},
	author  = {Hou, Yimin and Jia, Shuyue and Lun, Xiangmin and Zhang, Shu and Chen, Tao and Wang, Fang and Lv, Jinglei},   
	journal = {Frontiers in Bioengineering and Biotechnology},      
	volume  = {9},      
	year    = {Feb. 2022},      
	url     = {https://www.frontiersin.org/article/10.3389/fbioe.2021.706229},       
	doi     = {10.3389/fbioe.2021.706229},      
	ISSN    = {2296-4185}
}

@article{Jia2020AttentionGCN,
	title   = {Attention-based Graph {ResNet} for Motor Intent Detection from Raw EEG signals},
	author  = {Jia, Shuyue and Hou, Yimin and Lun, Xiangmin and Lv, Jinglei},
	journal = {arXiv preprint arXiv:2007.13484},
	year    = {2022}
}

Our papers can be downloaded from:

  1. A Novel Approach of Decoding EEG Four-class Motor Imagery Tasks via Scout ESI and CNN
    Codes and Tutorials for this work can be found here.

    Overall Framework (figure)

    Proposed CNNs Architecture (figure)

  2. GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals
    Slides Presentation for this work can be found here.

  3. Deep Feature Mining via Attention-based BiLSTM-GCN for Human Motor Imagery Recognition
    Slides Presentation for this work can be found here.

  4. Attention-based Graph ResNet for Motor Intent Detection from Raw EEG signals

Other Useful Resources

I think the following presentations will be helpful when you get started with Python and TensorFlow.

  1. Python Environment Setting-up Tutorial download

  2. Usage of Cloud Server and Setting-up Tutorial download

  3. TensorFlow for Deep Learning Tutorial download

Contribution

We always welcome contributions to help make EEG-DL Library better. If you would like to contribute or have any questions, please don't hesitate to email me at [email protected].

Organizations

The library was created and open-sourced by Shuyue Jia, supervised by Prof. Yimin Hou, at the School of Automation Engineering, Northeast Electric Power University, Jilin, Jilin, China.

eeg-dl's Issues

Some confusion about your thesis

Dear Shuyue,
Recently, I read your paper, GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals, which is an outstanding work. At the same time, I am studying GCN. Having read your paper carefully, I think your proposed GCNs-Net model is used to solve the problem of graph classification based on EEG signals. The raw signal is used as one input to the GCN model, and the other input is the graph representation derived from all signals. I have some confusion about the thesis.
One of my confusions is that the input raw signals are N x N matrices; is N = 8, and does 8 x 8 mean 64 channels? Why are these signals not input as a graph?
Another confusion is that all the signals were used to calculate the Pearson coefficient matrix to represent the relationship between the electrodes, which was then transformed into a graph representation as an input to the GCN. But the network topology between electrodes differs across states, so why don't different states correspond to different graphs as inputs to the GCN for classification?
I am very sorry for disturbing you, and your answer will help me a lot.
Best!

An IndexError in Extract-Raw-Data-Into-Matlab-Files.py


IndexError Traceback (most recent call last)
in ()
6 X = 'X_' + str(i)
7 Y = 'Y_' + str(i)
----> 8 X, Y = load_raw_data(electrodes=electrodes, subject=subject, num_classes=nclasses)
9 X = np.squeeze(X)
10

in load_raw_data(electrodes, subject, num_classes, long_edge)
180 except:
181 pass
--> 182 return np.array(trials, dtype=np.float64).reshape((len(trials),) + trials[0].shape + (1,)), np.array(labels, dtype=np.float64)
183
184

IndexError: list index out of range

How can I debug this? Thank you for your help.

Training Transformer Model

Hi, I'm trying to train the Transformer model code with the "EEG Motor Movement/Imagery Dataset".

I found that maxlen = 3 and embed_dim = 97 in the main-Transformer.py code.

The current input shape of the Transformer model is 3 * 97 = 291.

However, after I ran the make_dataset.m code, I got a data shape of 4096 (64 * 64).

So I wonder why the input shape of main-Transformer.py is 291 instead of 4096.

How to run transformer model

Currently, I am stuck at preprocessing. Which preprocessing script should I use to generate the corresponding files, e.g., training_set.csv, training_label.csv, test_set.csv, and test_label.csv?

TF2.x

Hello, will there be TF 2.x versions of the models?

EEG-Motor-Imagery-Classification-CNNs-TensorFlow

I am using Read_Raw_Data_Save_Into_Matlab_Files.py for the PhysioNet dataset, and the error that occurs is:

X_105_C5, y_105_C5 = load_raw_data(electrodes=electrodes, subject=subject, num_classes=nclasses)
File " \Read_Raw_Data_Save_Into_Matlab_Files.py", line 200, in load_raw_data
return np.array(trials, dtype=np.float64).reshape((len(trials),) + trials[0].shape + (1,)), np.array(labels, dtype=np.float64)
IndexError: list index out of range

Can you help me sort this out, or provide a link to the dataset with labels?

Features from BiLSTM to Graph Adjacency Matrix

Hi, thanks for your nice work!

I am wondering how to generate the adjacency matrix from the features of the BiLSTM, as mentioned in "Deep Feature Mining via Attention-based BiLSTM-GCN for Human Motor Imagery Recognition".

I guess your BiLSTM is channel-wise, but I cannot find a clear statement in your paper. If so, only the FC layer considers the spatial relationship; would that be effective enough?

Friendly tips

Hello, I happened to see someone selling your code on a second-hand trading website. This behavior is very bad, and I hope the author sees this.

Problems encountered when using my dataset

Thank you very much for your outstanding work. I plan to use your GCN framework to train on my dataset, which has only 2 subjects and 16 channels. Before training, I used your make_dataset.m file to create the dataset. What I want to do is binary classification. Before training, I modified some parameters of main-GCN.py:

Hyper-parameters

params = dict()
params['dir_name'] = Model
params['num_epochs'] = 100
params['batch_size'] = 256#1024
params['eval_frequency'] = 100#100

Architecture.

params['F'] = [16, 32, 64, 128, 256, 512] # Number of graph convolutional filters.
params['K'] = [2, 2, 2, 2, 2, 2] # Polynomial orders.
params['p'] = [2, 2, 2, 2] # Pooling sizes.
params['M'] = [2] # Output dimensionality of fully connected layers.

I have made the changes for the other common problems you mentioned on GitHub. When I run the code, I encounter the following problem:

Traceback (most recent call last):
File "D:/数据分类/EEG-DL-master/Models/main-GCN.py", line 83, in
accuracy, loss, t_step = model.fit(X_train, train_labels, X_test, test_labels)
File "D:\数据分类\EEG-DL-master\Models\Network\lib_for_GCN\GCN_Model.py", line 343, in fit
batch_data, batch_labels = train_data[idx, :], train_labels[idx]
IndexError: index 37488 is out of bounds for axis 0 with size 9600

Process finished with exit code 1

I found that my test_set size is 9600 x 16. Could it be because my dataset is too small? Can you give me some suggestions? Thank you very much for your reply, and apologies for disturbing you.

Best!

Error in GCN Model

I changed the code to fit my dataset for the GCN, but when I try to fit the model, I get the following error in the fit function of GCN_Model.py:
Attempt to convert a value (<tensorflow.python.framework.ops.Graph object at 0x000001C06CCFBA90>) with an unsupported type (<class 'tensorflow.python.framework.ops.Graph'>) to a Tensor. Does anyone know what causes this error and how I can fix it?

Issues about GCN_MODEL data processing

Hello author, reading your code has taught me a lot and it is very useful, but I have a small question.
In GCN_Model, how did you obtain dataset.mat? I think you handled it in the following way, but I am not sure whether I understand it correctly. The shape of each mat file in dataset.mat is [20, 84, 640]. I understand that 20 represents the subjects and 640 represents the span of each trial, i.e., 4 s x a sampling frequency of 160 Hz = 640 (this represents time). Then 84 = 4 x 21; what does the 21 mean here?
Now I want to use the code on my own dataset. My data format is a separate CSV file for each subject; each file includes different actions, where each row represents a time point and each column represents a channel. Can I directly stack all the subjects' CSV files vertically, so that the result has the same format as your training_set?
Thank you in advance, and I wish you all the best.

About training set data and training labels

Hello,
I used the make_dataset.m file to generate the train_set.xlsx file, which contains 10279 entries, but the generated train_label.xlsx file contains 15120. This is where things go wrong.
(Screenshots: train_label, train_set)

Missing files in sdist

It appears that the manifest is missing at least one file necessary to build
from the sdist for version 1.1b0. You're in good company, about 5% of other
projects updated in the last year are also missing files.

+ /tmp/venv/bin/pip3 wheel --no-binary eeg-dl -w /tmp/ext eeg-dl==1.1b0
Looking in indexes: http://10.10.0.139:9191/root/pypi/+simple/
Collecting eeg-dl==1.1b0
  Downloading http://10.10.0.139:9191/root/pypi/%2Bf/e27/50c838567b641/eeg-dl-1.1b0.tar.gz (6.0 kB)
    ERROR: Command errored out with exit status 1:
     command: /tmp/venv/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-415swdkb/eeg-dl/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-415swdkb/eeg-dl/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-wheel-415swdkb/eeg-dl/pip-egg-info
         cwd: /tmp/pip-wheel-415swdkb/eeg-dl/
    Complete output (7 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-wheel-415swdkb/eeg-dl/setup.py", line 13, in <module>
        with open(path.join(here, 'DL_Models', '__init__.py'), encoding='utf-8') as f:
      File "/usr/lib/python3.8/codecs.py", line 905, in open
        file = builtins.open(filename, mode, buffering)
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-wheel-415swdkb/eeg-dl/DL_Models/__init__.py'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

EEG input shape for transformer model

I am training the Transformer with EEG input of shape (26, 11000), where 26 is the number of channels (electrodes) and 11000 are the features. I am not able to understand how I should feed such input to the code.

#GCN,make_dataset.m

Hello author, when I run make_dataset.m in MATLAB, the following error occurs. I searched on Baidu and, following the suggestions, disabled the Excel COM add-ins, but it still does not work. Is this a memory problem on my computer, or something else?
