
eeg-conformer's Introduction

EEG-Conformer

EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization [Paper]

Core idea: spatial-temporal conv + pooling + self-attention

News

🎉🎉🎉 We've joined the braindecode toolbox. See here for detailed info.

Thanks to Bru and colleagues for helping with the modifications.
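
A hedged sketch of loading the model through braindecode (the class lives in braindecode.models, but parameter names vary across braindecode versions, so check the docs for yours; the values below are illustrative for BCI IV 2a):

import torch
from braindecode.models import EEGConformer

# Illustrative values: 4 classes, 22 channels, 1000 samples per trial
model = EEGConformer(n_outputs=4, n_chans=22, n_times=1000)
logits = model(torch.randn(2, 22, 1000))  # braindecode models typically take (batch, channels, time)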

Abstract

Network Architecture

  • We propose a compact convolutional Transformer, EEG Conformer, to encapsulate local and global features in a unified EEG classification framework.
  • The convolution module learns low-level local features through one-dimensional temporal and spatial convolution layers. The self-attention module is connected directly afterwards to extract global correlations within the local temporal features. Finally, a simple classifier module based on fully connected layers predicts the categories of the EEG signals (a minimal sketch follows this list).
  • We also devise a visualization strategy to project the class activation mapping onto the brain topography.
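
For orientation, here is a minimal PyTorch sketch of this pipeline, assuming paper-style hyperparameters (40-dimensional embedding, temporal kernel 25, pooling kernel 75 with stride 15, 6 encoder layers); it is an illustration, not the released implementation:

import torch
import torch.nn as nn

class ConformerSketch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, emb_dim=40):
        super().__init__()
        self.patch_embed = nn.Sequential(
            nn.Conv2d(1, emb_dim, (1, 25)),                # temporal convolution
            nn.Conv2d(emb_dim, emb_dim, (n_channels, 1)),  # spatial convolution over electrodes
            nn.BatchNorm2d(emb_dim),
            nn.ELU(),
            nn.AvgPool2d((1, 75), stride=(1, 15)),         # pooling shortens the token sequence
            nn.Dropout(0.5),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=10, dim_feedforward=160,
            batch_first=True, activation="gelu")
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
        self.classify = nn.LazyLinear(n_classes)           # flattened tokens -> class logits

    def forward(self, x):                                  # x: (batch, 1, channels, time)
        x = self.patch_embed(x)                            # (batch, emb, 1, tokens)
        x = x.squeeze(2).transpose(1, 2)                   # (batch, tokens, emb)
        x = self.encoder(x)                                # global self-attention over tokens
        return self.classify(x.flatten(1))

logits = ConformerSketch()(torch.randn(2, 1, 22, 1000))   # -> (2, 4)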

Requirements:

  • Python 3.10
  • PyTorch 1.12

Datasets

Please use a consistent train-val-test split when comparing with other methods.

Citation

Hope this code can be useful. I would appreciate it if you cite us in your paper. 😊

@article{song2023eeg,
  title = {{{EEG Conformer}}: {{Convolutional Transformer}} for {{EEG Decoding}} and {{Visualization}}},
  shorttitle = {{{EEG Conformer}}},
  author = {Song, Yonghao and Zheng, Qingqing and Liu, Bingchuan and Gao, Xiaorong},
  year = {2023},
  journal = {IEEE Transactions on Neural Systems and Rehabilitation Engineering},
  volume = {31},
  pages = {710--719},
  issn = {1558-0210},
  doi = {10.1109/TNSRE.2022.3230250}
}

eeg-conformer's People

Contributors

eeyhsong


eeg-conformer's Issues

About SEED dataset

Could you please share your preprocessing and training code for the SEED dataset? Especially in the data loading and augmentation parts, many parameters are hard-coded, which makes it hard to understand.

accuracy lower than that in the paper

Hi,

I built the input data from dataset 2a, including both data and labels. I ran the code and found that each subject's accuracy is lower than what you reported in the paper. What should I do? For example, my best accuracy for subject 1 is 77% while yours is 88%; my subject 2 is 47% while yours is 61%. The other subjects follow the same trend.

Here is the link for my input data, just in case my input data is wrong: https://drive.google.com/file/d/1WQaB4G2i6z8r6drgbJ9gF5aZCrA_Ydek/view?usp=sharing

Thanks,

Yameng

Regarding target category in visualization scripts

It seems that the visualization script is missing a way to select which target category to visualize. In the CAT.py script, a variable called target_category exists but is not used anywhere. While reviewing the utils.py script, I found some references to a similarly named variable, but it is set to None. I was wondering if the script lacks an exposed argument to the GradCAM class where the target_category can be defined. I would really appreciate your help in defining which categories to plot, since I am trying to reproduce the results.
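
For reference, standalone Grad-CAM implementations usually take the target class per call rather than at construction. A hedged sketch with the pytorch-grad-cam package (pip install grad-cam), whose interface the repo's utils.py resembles; model, target_layer, and the class index are placeholders, not values from the repo:

import torch
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

cam = GradCAM(model=model, target_layers=[target_layer])   # e.g. the last conv layer
input_tensor = torch.from_numpy(data[:1]).float()          # one trial: (1, 1, 22, 1000)
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(2)])   # 2 = hypothetical class to visualize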

Improvement on Dataset 1 (BCI IV 2a) S05

I found that using my preprocessing program below

% Get all gdf files in the folder
gdfFolder = './BCICIV_2a_gdf/'; % folder containing the gdf files
matFolder = './true_labels/';
outFolder = './Data/';
gdfFiles = dir(fullfile(gdfFolder, '*.gdf')); % list all gdf files in the folder
nfiles = length(gdfFiles); % number of files
disp(nfiles);
% Process the gdf files one by one
for k = 1:nfiles
    % Current file name
    [gdfpath,gdfname,gdfExt] = fileparts(gdfFiles(k).name); % split the file name
    gdfpath = fullfile(gdfFolder, [gdfname,gdfExt]); % full path
    matpath = fullfile(matFolder, [gdfname,'.mat']);
    % Read the gdf file
    [signal, HDR] = sload(gdfpath); % load signal and header (BioSig toolbox)
    load(matpath);
    label = classlabel;

    % Event codes
    % Event type   Description
    % 276   0x0114 Idling EEG (eyes open)
    % 277   0x0115 Idling EEG (eyes closed)
    % 768   0x0300 Start of a trial
    % 769   0x0301 Cue onset left (class 1)
    % 770   0x0302 Cue onset right (class 2)
    % 771   0x0303 Cue onset foot (class 3)
    % 772   0x0304 Cue onset tongue (class 4)
    % 783   0x030F Cue unknown
    % 1023  0x03FF Rejected trial
    % 1072  0x0430 Eye movements
    % 32766 0x7FFE Start of a new run
    event_pos = HDR.EVENT.POS; % event positions (samples)
    event_typ = HDR.EVENT.TYP; % event type codes
    event_dur = HDR.EVENT.DUR; % event durations (samples)

    % Collect the trial data
    data = []; % empty array
    cnt = 1;
    for i = 1:length(event_pos) % loop over all events
        event_type = event_typ(i); % event type
        if event_type == 768 % start of a trial (0x0300)
            start_pos = event_pos(i) + 500; % trial start, 2 s after onset at 250 Hz
            end_pos = start_pos + event_dur(i) - 1 - 875; % trial end (a 1000-sample window for the standard 2a timing)
            data(cnt,:,:) = transpose(signal(start_pos:end_pos,1:22)); % keep the 22 EEG channels
            cnt = cnt + 1;
        end
    end

    data = fillmissing(data,'constant',0);

    % Design a Chebyshev type II band-pass filter (order chosen by cheb2ord)
    Fs = HDR.SampleRate; % sampling frequency
    Wp = [0.1 60] / (Fs/2); % normalized passband edges
    Ws = [0.05 100] / (Fs/2); % normalized stopband edges
    Rp = 1; % passband ripple (dB)
    Rs = 40; % stopband attenuation (dB)
    [n, Wn] = cheb2ord(Wp, Ws, Rp, Rs); % compute filter order and cutoff
    [b, a] = cheby2(n, Rs, Wn); % cheby2 takes the stopband attenuation Rs (the original passed Rp here)

    % Filter each trial along the time dimension
    for i = 1:length(label)
        tmp = squeeze(data(i,:,:));             % 22 x T
        data(i,:,:) = filter(b, a, tmp, [], 2); % filter along time (the original filtered across channels)
    end

    % Save as a mat file
    save([outFolder,gdfname,'.mat'], 'data', 'label'); % save the trial array and labels

    % Progress
    disp(['Processed file ' num2str(k) ' of ' num2str(nfiles)]);
end

% Done
disp('All files processed.');

would substantially improve the accuracy on S05:

acc: 52.08% --> 76.02%

while the other subjects show only mild fluctuation.
I believe it's purely a preprocessing issue; hope this helps :D

Input data example

Could you upload one input data for Conformer python file to run as an example?

The README does not mention the required input data format, so I am confused.

Thanks.

Dataset used in conformer.py

First, thank you for open-sourcing your code.
I'm wondering where the .mat data you used in conformer.py (specifically at lines #278 and #294) comes from.
It matches neither the data format of BCI Competition IV 2a/2b, which is .gdf, nor the data format of SEED. These are the three datasets claimed in README.md.
Thank you for clarifying.

KeyError: 'label'

Hello, and thank you very much for open-sourcing your code.
Could you help me with this problem I ran into while running it?
(screenshot attached)

Question About CAT

Hi!
I'm impressed by your visualization method CAT in the paper. After reading the paper, I have some questions about CAT and the model accuracy.

Q1. In the visualization file CAT.py, the input of the file is 'data'. Is this 'data' the original training data of shape (288, 1, 22, 1000) after standardization, or is it generated in some intermediate step?
data = np.load('./grad_cam/train_data.npy')

Q2. After running the Conformer several times, I found the accuracy surprisingly lower than the paper's results, so would it be possible to share the pre-trained weights of the Conformer?

Thank you for sharing your excellent work.
Best wishes!

About releasing the model

Hi! Thank you for your valuable work. I would like to kindly ask if you plan to release the pre-trained models.

The difference between EEG-Transformer and EEG-Conformer

Thank you for sharing your work! I noticed that you proposed the EEG-Transformer model in https://github.com/eeyhsong/EEG-Transformer. The EEG-Transformer achieved an accuracy of 82.59% on the 2a dataset, which surpasses the performance of the EEG-Conformer. However, when I attempted to reproduce these results with both models, I obtained a similar accuracy with the EEG-Conformer (around 78%) but only 66.47% with the EEG-Transformer. I would appreciate it if you could clarify the differences between the two models and advise which one I should cite in my research. I look forward to your response.

Invitation to add your model to braindecode

Hi @eeyhsong!

I was reading the paper this week, and it is a very nice paper. I was wondering if you would be interested in integrating your model into braindecode? From what I have looked over, your code seems compatible with what we have in braindecode (PyTorch and skorch), and you could write a tutorial to explain the mechanisms. This would increase your paper's engagement, bring citations, and help the open-source community.

I can help review and create tests for the model. Please let me know if this fits with your goals.

Missing SoftMax function

Hi

Thanks for sharing your implementation. Really appreciate it.
Even though it is stated that a SoftMax activation function has been used in the classification part, your implementation does not use this function in the last layer of the model.
Could you clarify whether there is a reason, or am I just missing something?
Thanks.
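
A likely explanation (an educated guess, not the authors' confirmation): PyTorch's nn.CrossEntropyLoss already applies log-softmax internally, so networks trained with it output raw logits and an explicit SoftMax layer would be redundant during training:

import torch
import torch.nn as nn

logits = torch.randn(8, 4)                     # raw model outputs for 4 classes
targets = torch.randint(0, 4, (8,))
loss = nn.CrossEntropyLoss()(logits, targets)  # log-softmax happens inside the loss
probs = logits.softmax(dim=1)                  # apply softmax only when probabilities are needed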

Question about using SEED dataset

Hi,

I'm impressed with your paper.

After reading the paper, I got some questions about using and evaluating the SEED dataset.

In the paper, you mentioned that each session contains 3394 trials, segmented from the original data using a non-overlapping one-second time window. However, I would like to know more details about the dataset settings.

Q1. Is the input shape of the data (62 x 200), i.e., number of channels x one second of data at a 200 Hz sampling rate?

Q2. The number of subjects is 15, with three sessions for each subject, and each session consists of 15 clips. If you follow the train/test setting in Zheng et al., did you also use the first 9 clips as the train set and the last 6 clips as the test set? If this is not the case, could you explain more about this?

W.-L. Zheng and B.-L. Lu, "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks," IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, 2015, pp. 162-175.

Q3. If Q2 is right, did you average the results over the 45 sessions (15 subjects x 3 sessions)?

Q4. If it's alright with you, could you share your preprocessing code for the SEED dataset?

Thank you for sharing your excellent research.

Reproducing the results of Graz data set A, 4 class motor imagery

Thank you for sharing your code! It is really nice to be able to reproduce results.
I have to admit that I am confused about how to prepare the dataset.
Once I am able to understand I might submit a PR explaining it in the README of this project.

  1. Is it possible to get all data using only the gdf files? I was trying to do that, but I realized that there is a problem with the labels for the test data. Here is my code for extracting data epochs (data windows) from the gdf. It works for the train data, but not for the test data.
        filename = self.root + 'A0%dT.gdf' % self.nSub
        print("Train subject data",filename)
        raw = mne.io.read_raw_gdf(filename)
        # Find the events time positions
        events, _ = mne.events_from_annotations(raw)
        # Pre-load the data
        raw.load_data()
        # Filter the raw signal with a band pass filter in 7-35 Hz
        raw.filter(7., 35., fir_design='firwin')
        # Remove the EOG channels and pick only desired EEG channels
        raw.info['bads'] += ['EOG-left', 'EOG-central', 'EOG-right']
        picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=False, stim=False,
                       exclude='bads')
        # Extract epochs of a 3 s time period from the dataset into 288 events for all 4 classes
        #tmin, tmax = 1.0, 4.0
        tmin, tmax = 0, 4 - (1/250) #because fs = 250
        # left_hand = 769,right_hand = 770,foot = 771,tongue = 772
        event_id = dict({'769': 7,'770': 8,'771': 9,'772': 10})
        epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
                baseline=None, preload=True)
        train_data = epochs.get_data()
        train_labels = epochs.events[:,-1] - 7 + 1    

It seems that there is only one class in the test data: 769 (left_hand).
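
For context, the evaluation (E) .gdf files of BCI Competition IV 2a intentionally mark every cue as 783 ("unknown"), so class labels cannot be recovered from those files alone; the competition distributes the true labels separately as .mat files. A minimal sketch (the path is a placeholder for wherever you store the label files):

import scipy.io

# 'classlabel' holds one value in 1..4 per trial of the evaluation session
true_labels = scipy.io.loadmat('true_labels/A01E.mat')['classlabel'].squeeze()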

  2. I also tried the mat files provided at http://bnci-horizon-2020.eu/database/data-sets,
    such as: http://bnci-horizon-2020.eu/database/data-sets/001-2014/A01T.mat.
    But I do not see where the labels are.

  3. Now I am trying:

  I also verified that my code from 1) gives the right labels for A01T:
    4, 3, 2, 1, 1, 2, 3, 4, 2, 3, 1, 1, 1, 4, 2, 2, 1, 1, 3, 1, 2, 4,
    and when using your code with the true labels:
    4, 3, 2, 1, 1, 2, 3, 4, 2, 3, 1, 1, 1, 4, 2, 2, 1, 1, 3, 1, 2, 4,

Please comment on 1), 2), 3).

2b dataset

Thank you for sharing. I would like to know how to process the 2b data. I saw your other work, EEG-Transformer; its getData.m seems able to process data, but it appears to target only the 2a data. How should it be modified to handle the 2b data, which has 5 session files per subject? Additionally, in your code the function get_source_data is also aimed at the 2a data. How should it be modified to read the 2b data?
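
For reference, a hedged MNE sketch of one way to combine the five 2b session files for a subject (the B0101T...B0105E naming follows the official 2b release, with three training and two evaluation sessions per subject):

import mne

# Five session files per subject in BCI IV 2b: three training (T), two evaluation (E)
fnames = ['B0101T.gdf', 'B0102T.gdf', 'B0103T.gdf', 'B0104E.gdf', 'B0105E.gdf']
raws = [mne.io.read_raw_gdf(f, preload=True) for f in fnames]
raw = mne.concatenate_raws(raws)  # one continuous Raw covering all five sessions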

Accuracy lower than that in the paper

I downloaded the BCI-IV-2a data from the official website. Following your paper, I used the Python MNE package for preprocessing. Running your Conformer code from GitHub without filtering gives an average accuracy of only 68.98%; if I add filtering, the accuracy is even lower. May I ask why this is?

nice work

Looking forward to your new contributions. 👍

paper acc problem

Dear author, I recently ran into a problem of low accuracy with your model. A01 acc was 0.78, A02 acc was 0.50, and A03 was about 0.86, but the acc of S5 was unexpectedly as high as 0.75. I used MNE for preprocessing and extracted the 2-6 s motor imagery data with event ID 768. In addition, data standardization was carried out in the preprocessing stage (the get_source_data module from the original code was adapted to my own preprocessed data), but I did not modify other parts of the code. The average acc is around 73%.
Another observation is that skipping band-pass filtering seems to improve the accuracy (over multiple experiments).
Here is my Python code (unfiltered):

import random
from collections import Counter

import mne
import numpy as np
import torch
from sklearn.preprocessing import StandardScaler,OneHotEncoder
import scipy.io
import torchvision.transforms as transforms
from sklearn.model_selection import train_test_split
from scipy import signal


# Set the random seeds
seed_value = 42  # any integer works as the seed

# Python's built-in random module
random.seed(seed_value)

# NumPy random number generator
np.random.seed(seed_value)

# PyTorch random number generators
torch.manual_seed(seed_value)
torch.cuda.manual_seed(seed_value)
torch.cuda.manual_seed_all(seed_value)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

"""Before preprocessing, check that the data is stable and free of excessive noise."""
"""If you want to use this file, please make sure there is no noise in your original data."""

# Preprocess the data. Only suitable for files ending in T (training data); other files
# need a modified function. save_filename should end with .npy.
def transform_save_data(filename, save_filename=None):
    raw = mne.io.read_raw_gdf(filename)
    print(raw.info['ch_names'])
    events, event_id = mne.events_from_annotations(raw)
    raw.info['bads'] += ['EOG-left', 'EOG-central', 'EOG-right']
    # motor imagery window: 2-6 s
    tmin, tmax = 2, 6
    event_id = {'768': 6}
    # the raw object must be loaded before filtering
    raw.load_data()
    # iir_params = dict(order=6, ftype='cheby2', rs=60)  # Chebyshev type II filter
    # raw.filter(l_freq=8, h_freq=35, method='iir', iir_params=iir_params)
    # raw.filter(8.0, 35.0, fir_design='firwin')
    picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, exclude='bads')
    epochs = mne.Epochs(raw=raw, events=events, event_id=event_id, tmin=tmin, tmax=tmax,
                        preload=True, baseline=None, picks=picks)
    epoch_data = epochs.get_data()
    # drop the last sample so each trial has exactly 1000 points
    epoch_data = epoch_data[:, :, :-1]

    epoch_data = epoch_data.reshape(epoch_data.shape[0], 1, 22, 1000)
    if save_filename is not None:
        np.save(save_filename, epoch_data)


# Standardize the data
def data_processing(BCI_IV_2a_data, label_filename):
    Scaler = StandardScaler()
    X_train = BCI_IV_2a_data.reshape(BCI_IV_2a_data.shape[0], 22000)
    X_train_Scaler = Scaler.fit_transform(X_train)
    # reshape back to (trials, 1, channels, samples)
    acc_train = X_train_Scaler.reshape(BCI_IV_2a_data.shape[0], 1, 22, 1000)
    data_label = scipy.io.loadmat(label_filename)
    print(data_label['classlabel'].reshape(288))
    Label = data_label['classlabel'].reshape(288)

    return acc_train, Label


# Convert the data to tensors; saved files should use the .pt suffix
def data_transform_tensor(acc_train, y_oh, save_datafilename=None, save_labelfilename=None):
    # transf = transforms.ToTensor()
    # d = transf(y_oh)
    # # collapse the one-hot labels: the label is the argmax over the class dimension
    # label = torch.argmax(d, dim=2).long()

    data = torch.tensor(acc_train, dtype=torch.float32)
    labels = torch.tensor(y_oh, dtype=torch.long)
    if save_datafilename is not None:
        torch.save(data, save_datafilename)
    if save_labelfilename is not None:
        torch.save(labels, save_labelfilename)

    return data, labels



# Combine datasets
def combine_data(data_list, label_list, data_filename, label_filename):
    """Concatenate the augmented EEG data and save it as .pt files.

    Parameters
    ----------
    data_list : list of torch.Tensor
        EEG data tensors to concatenate.
    label_list : list of torch.Tensor
        Label tensors to concatenate.
    data_filename : str
        Output path for the combined data.
    label_filename : str
        Output path for the combined labels.
    """
    data_combine = torch.cat(data_list, axis=0)
    label_combine = torch.cat(label_list, axis=0)
    torch.save(data_combine, data_filename)
    torch.save(label_combine, label_filename)

    return data_combine, label_combine


# Band-pass filter the data with a Butterworth filter
def buttferfiter(data):
    Fs = 250
    b, a = signal.butter(6, [8, 30], 'bandpass', fs=Fs)
    data = signal.filtfilt(b, a, data, axis=-1)  # filter along the time axis (the original used axis=1)
    return data


# Time-domain EEG data augmentation: split trials into segments, recombine, and shuffle
def interaug(timg, label, batch_size):
    """timg is the data, label is the labels."""
    """
    tmp_aug_data holds the generated augmented samples, shaped (batch_size / 4, 1, 22, 1000);
    each augmented sample is stitched together from 8 segments of shape (1, 22, 125).

    rand_idx holds randomly chosen trial indices used to pick each segment.
    aug_data and aug_label collect the augmented samples and labels over all classes.
    aug_shuffle randomly permutes the augmented samples and labels.
    """
    aug_data = []
    aug_label = []
    for cls4aug in range(4):
        # select the data and labels belonging to this class
        cls_idx = np.where(label == cls4aug + 1)  # label == cls4aug + 1
        tmp_data = timg[cls_idx]
        tmp_label = label[cls_idx]
        # buffer for the augmented epochs
        tmp_aug_data = np.zeros((int(batch_size / 4), 1, 22, 1000))
        for ri in range(int(batch_size / 4)):
            # stitch each augmented trial from 8 randomly chosen time segments
            for rj in range(8):
                rand_idx = np.random.randint(0, tmp_data.shape[0], 8)
                # recombine segments drawn from different trials of the same class
                tmp_aug_data[ri, :, :, rj * 125:(rj + 1) * 125] = tmp_data[rand_idx[rj], :, :, rj * 125:(rj + 1) * 125]

        aug_data.append(tmp_aug_data)
        aug_label.append(tmp_label[:int(batch_size / 4)])
    aug_data = np.concatenate(aug_data)
    aug_label = np.concatenate(aug_label)
    aug_shuffle = np.random.permutation(len(aug_data))
    aug_data = aug_data[aug_shuffle, :, :]
    aug_label = aug_label[aug_shuffle]

    aug_data = torch.from_numpy(aug_data).cuda()
    aug_data = aug_data.float()
    aug_label = torch.from_numpy(aug_label).cuda()  # aug_label - 1
    aug_label = aug_label.long()
    return aug_data, aug_label



# Split off part of the data for testing
def split_EEGdata(data, label):
    data = data.view(data.shape[0], 1, 22, 1000)
    data = data[:100]
    label = label[:100]
    torch.save(data, 'EEG_data_split.pt')
    torch.save(label, 'EEG_label_split.pt')


# Obtain the raw data without filtering
def transform_save_data_version2(filename, save_filename=None):
    """
    Load the raw EEG data.
    :param filename: data filename
    :param save_filename: output filename
    :return: the epoch data (optionally also saved as an .npy file)
    """
    raw = mne.io.read_raw_gdf(filename)
    print(raw.info['ch_names'])
    events, event_id = mne.events_from_annotations(raw)
    raw.info['bads'] += ['EOG-left', 'EOG-central', 'EOG-right']
    # motor imagery window: 2-6 s
    tmin, tmax = 2, 6
    event_id = {'768': 6}
    # the raw object must be loaded before further processing
    raw.load_data()
    picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, exclude='bads')
    epochs = mne.Epochs(raw=raw, events=events, event_id=event_id, tmin=tmin, tmax=tmax,
                        preload=True, baseline=None, picks=picks)
    epoch_data = epochs.get_data(copy=True)
    # drop the last sample so each trial has exactly 1000 points
    epoch_data = epoch_data[:, :, :-1]

    epoch_data = epoch_data.reshape(epoch_data.shape[0], 1, 22, 1000)
    if save_filename is not None:
        np.save(save_filename, epoch_data)

    return epoch_data

if __name__ == '__main__':
    count = input('please input your subject ID:')
    filename = 'C:\\Users\\24242\\Desktop\\AI_Reference\\data_bag\\BCICIV_2a_gdf\\A0' + count + 'T.gdf'
    BCI_data = transform_save_data_version2(filename)
    label_filename = 'C:\\Users\\24242\\Desktop\\AI_Reference\\data_bag\\BCICIV_2a_gdf\\A0' + count + 'T.mat'
    acc_train, y_oh = data_processing(BCI_data, label_filename)
    data, label = data_transform_tensor(acc_train, y_oh)

    # data = data.reshape(data.shape[0], 22, 1000)
    data = np.array(data)
    label = np.array(label)
    # print(label)
    data1, label1 = interaug(data, label, batch_size=288)

    data = torch.from_numpy(data)
    label = torch.from_numpy(label)
    print(data.type())
    print(label.type())
    # data1 = torch.from_numpy(data1)
    # label1 = torch.from_numpy(label1)
    data_list = [data.to('cuda'), data1.to('cuda')]
    label_list = [label.to('cuda'), label1.to('cuda')]

    data_filename = '../EEG-dataprocessing/2a/paper_data_label/A0'+ count + '_combine/A0'+ count + '_combine_data.pt'
    label_filename = '../EEG-dataprocessing/2a/paper_data_label/A0'+ count + '_combine/A0'+ count +'_combine_label.pt'

    data_combine, label_combine = combine_data(data_list, label_list, data_filename, label_filename)


    data_combine = data_combine.detach().cpu().numpy()
    label_combine = label_combine.detach().cpu().numpy()
    print(data_combine.shape)
    print(label_combine.shape)
    train_data, test_data, train_label, test_label = train_test_split(data_combine, label_combine, test_size=0.2, train_size=0.8, shuffle=True)
    train_data = torch.from_numpy(train_data).float()
    test_data = torch.from_numpy(test_data).float()
    train_label = torch.from_numpy(train_label).long()
    test_label = torch.from_numpy(test_label).long()
    torch.save(train_data, '../EEG-dataprocessing/2a/paper_data_label/A0' + count + '_combine/train_data_A0' + count + '.pt')
    torch.save(test_data, '../EEG-dataprocessing/2a/paper_data_label/A0' + count + '_combine/test_data_A0' + count + '.pt')
    torch.save(train_label, '../EEG-dataprocessing/2a/paper_data_label/A0' + count + '_combine/train_label_A0' + count + '.pt')
    torch.save(test_label, '../EEG-dataprocessing/2a/paper_data_label/A0' + count + '_combine/test_label_A0' + count + '.pt')
    print(train_data.shape)
    print(test_data.shape)
    print(train_label.shape)
    print(test_label.shape)

Transforming the original data

It's not clear to me how the original datasets (for example, the .gdf files from the motor imagery task) were transformed into the specific format of the .mat files that the script loads. I would very much appreciate it if you could share either the code for the transformation or the transformed data directly (so the script can be run on it as-is).

runtime error in tSNE.py

Hi, I ran tSNE.py and got an error: AttributeError: module 'matplotlib.pyplot' has no attribute 'Set1'. I'd like to know how to solve this problem.
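
For what it's worth, Set1 is a colormap, so it lives under matplotlib.cm rather than matplotlib.pyplot; a hedged one-line fix (the exact call site in tSNE.py is assumed):

import matplotlib.pyplot as plt

# plt.Set1 does not exist; reference the colormap via the cm namespace instead:
colors = plt.cm.Set1(labels.astype(int))  # `labels` stands for whatever tSNE.py passes in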

There are many versions of the SEED dataset; which one did you use?

Hello, I downloaded the SEED dataset and found it contains several subsets, such as Processed, Extracted_1s, Extracted_4s, and others. Which one did you use?
Also, I have seen many papers use DE (differential entropy) features. Does your paper use only the raw data? Would the method also work on those DE features?

train_data.npy in CAT.py

Hi! Thank you for sharing the code!

I have a question about the data you use in the CAT visualization. There is one line, data = np.load('./grad_cam/train_data.npy'), but I couldn't find the corresponding file. Could you explain what this file is and what kind of data it contains? Thanks!

Some questions about conformer

Dear author, I have used your Conformer recently and it works really well. I would like to ask you the following questions:

  1. The model seems to fit slowly, and the test accuracy fluctuates a lot. Are there any other optimization suggestions?
  2. I have been following your EEG-Transformer. Why were the CSP and spatial channel attention used in EEG-Transformer abandoned here?
    I would appreciate it if you could answer my questions.

How should 'Label' look?

Hey, I am new to EEG processing.
I am getting an error on 'label', and it seems that the labels don't exist.
(screenshot attached)

I am trying to make a separate file for the labels, and I would like to know what the labels should look like.

Thanks in advance

About the size of the model on a different dataset

Hi there, I read the paper and was excited to see a method that is lightweight and efficient. However, when I tried it on my own dataset, the size of the model surged to 600 GB! Do you have any idea why? Is it because my dataset has 128 channels and 2500 sampling points? I am not that familiar with how the code works, so I wonder if you can help me :)
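
One factor worth checking (a hedged guess, not a confirmed diagnosis): the classifier flattens (tokens x embedding dim), and the token count grows with the number of time samples, so both the fully connected layer and the quadratic self-attention maps blow up on long recordings. A rough sketch with paper-style temporal kernel 25 and pooling (kernel 75, stride 15); the numbers are illustrative:

def n_tokens(n_samples, conv_k=25, pool_k=75, pool_s=15):
    t = n_samples - conv_k + 1           # length after the temporal convolution
    return (t - pool_k) // pool_s + 1    # length after average pooling

print(n_tokens(1000))  # 61 tokens for the 1000-sample 2a setting
print(n_tokens(2500))  # 161 tokens for 2500 samples -> much larger fc layer and attention maps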

Result about Dataset I's accuracy

I ran this code with your preprocessing code from EEG-Transformer (getData.m). However, each subject's result is not as good as in your paper. If you used a new way of preprocessing the data this time, could you please tell me the difference? Thank you very much.
