
genericobjectdecoding's Introduction

Generic Object Decoding

This repository contains the data and demo code for replicating the results in our paper: Horikawa and Kamitani (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications 8:15037. The generic object decoding approach enables decoding of arbitrary object categories, including those not used in model training.

Dataset

Code

Demo programs for Matlab and Python are available in code/matlab and code/python, respectively. See the README.md in each directory for details.

Note

Visual images

For copyright reasons, we do not make the visual images used in our experiments publicly available. You can request the stimulus images at https://forms.gle/ujvA34948Xg49jdn9.

Stimulus images used for the higher visual area localizer experiments in this study are available via https://forms.gle/c6HGatLrt7JtTGQk7.

Some of the test images were taken from ILSVRC 2012 training images. See data/stimulus_info_ImageNetTest.csv for the list of images included in ILSVRC 2012 training images.

genericobjectdecoding's People

Contributors

horikawa-t, mitsuaki, shuntaroaoki


genericobjectdecoding's Issues

Subjects' gender information

According to the paper, there were five subjects, one female and four males. Could you tell us which of the five subjects is the female?

Thank you for your help!

Get stimuli presentation order (for the brain data)?

Is there a simple way to find out which stimulus (image) was presented at which time point? I assume it is stored somewhere.

I apologize if this is an obvious question, but I'm not familiar with the package you are using. Also, is there documentation somewhere on how to use the brain decoding toolbox?

Thanks in advance!

error when running analysis_FeaturePrediction.py

Hi, I am running the code with Python 2 on a MacBook, but I get the following error. Could you have a look?

Many thanks.

File "analysis_FeaturePrediction.py", line 307, in <module>
    main()
  File "analysis_FeaturePrediction.py", line 75, in main
    data_feature = bdpy.BData(image_feature)
  File "/Users/Pan/Downloads/GenericObjectDecoding/code/python/env2/lib/python2.7/site-packages/bdpy/bdata/bdata.py", line 72, in __init__
    self.load(file_name, file_type)
  File "/Users/Pan/Downloads/GenericObjectDecoding/code/python/env2/lib/python2.7/site-packages/bdpy/bdata/bdata.py", line 558, in load
    self.__load_h5(load_filename)
  File "/Users/Pan/Downloads/GenericObjectDecoding/code/python/env2/lib/python2.7/site-packages/bdpy/bdata/bdata.py", line 700, in __load_h5
    md_keys = dat["metadata"]['key'][:].tolist()
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/Users/Pan/Downloads/GenericObjectDecoding/code/python/env2/lib/python2.7/site-packages/h5py/_hl/group.py", line 167, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'metadata' doesn't exist)"
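A common cause of this KeyError (an assumption on my part, not a confirmed diagnosis) is that the file handed to bdpy.BData is not in the BData HDF5 layout, e.g. an incomplete download or a plain .mat file, so the top-level 'metadata' group is absent. Listing the top-level keys with h5py is a quick sanity check; the sketch below does so on a throwaway in-memory file standing in for the real data file:

```python
import h5py
import numpy as np

# Build a throwaway in-memory HDF5 file (driver='core' with
# backing_store=False keeps it off disk).
f = h5py.File('inspect_demo.h5', 'w', driver='core', backing_store=False)
f.create_group('metadata')
f['dataset'] = np.zeros((2, 3))

# Listing the top-level keys shows whether a 'metadata' group exists;
# if it is missing, the file is not in the layout bdpy expects.
keys = sorted(f.keys())
print(keys)
f.close()
```

Running the same key listing on the actual ImageFeatures.h5 or SubjectX.h5 should show a 'metadata' entry; if it is missing, try re-downloading the file.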

Stimulus images

Dear all,
Could you share the stimulus image dataset with me? I can't download all the images used in the experiment. Thank you.
Best regards

Regarding Createfigure.py

Hi
While running createfigure.py, no result comes out and nothing is plotted. Could this be caused by the Python version or something else?

problem with bdpy package

Hey, I am trying to use this file with Python 2.7.12. However, the provided bdpy package doesn't seem to work with your .mat file.

KeyError: "Unable to open object (Object 'metadata' doesn't exist)"

Excuse me, when I run the file "analysis_FeaturePrediction.py", I get the following error:

Loading data
Traceback (most recent call last):
  File "analysis_FeaturePrediction.py", line 306, in <module>
    main()
  File "analysis_FeaturePrediction.py", line 74, in main
    data_feature = bdpy.BData(image_feature)
  File "/home/zjt2/.local/lib/python2.7/site-packages/bdpy/bdata/bdata.py", line 72, in __init__
    self.load(file_name, file_type)
  File "/home/zjt2/.local/lib/python2.7/site-packages/bdpy/bdata/bdata.py", line 558, in load
    self.__load_h5(load_filename)
  File "/home/zjt2/.local/lib/python2.7/site-packages/bdpy/bdata/bdata.py", line 700, in __load_h5
    md_keys = dat["metadata"]['key'][:].tolist()
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/_objects.c:2846)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/_objects.c:2804)
  File "/opt/anaconda/envs/python2/lib/python2.7/site-packages/h5py/_hl/group.py", line 169, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/_objects.c:2846)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/_objects.c:2804)
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/h5o.c:3740)
KeyError: "Unable to open object (Object 'metadata' doesn't exist)"

Can you tell me the possible causes?

four models

Excuse me, I don't understand why you used four types of computational models. Wouldn't the CNN alone have been enough for these experiments, given that the results also showed the CNN was more effective than the others?

Stimulus_id and image_index in fMRI data

When I used pandas to extract the training data of Subject 1, I found two problems:

  1. There may be a small mistake in the category index. When I sort the table by image_index, there are only 4 images in the first category and 4 images in the 151st category (screenshot attached).
  2. When I use the tsv file to check the correspondence between the training image index and the stimulus id, four images seem to be out of order: image No. 958 in the training data is No. 959 in the tsv file, and image No. 1102 in the training data is No. 1103 in the tsv file.
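For anyone wanting to reproduce this check, a minimal sketch for locating positions where two index sequences disagree (the lists here are hypothetical stand-ins for the training-data indices and the tsv indices):

```python
# Hypothetical index sequences; in practice they would be read from the
# training data and the tsv file.
train_ids = [101, 102, 959, 104, 1103]
tsv_ids   = [101, 102, 958, 104, 1102]

# Report every position where the two sequences disagree.
mismatches = [(pos, a, b)
              for pos, (a, b) in enumerate(zip(train_ids, tsv_ids))
              if a != b]
print(mismatches)  # each entry: (position, training id, tsv id)
```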

'Improper assignment with rectangular empty matrix'

Using the Matlab code, after setting up the environment and the required data files, I ran analysis_FeaturePrediction.m. I always get the error 'Improper assignment with rectangular empty matrix' at line 223 of analysis_FeaturePrediction.m.

AttributeError: 'Dataset' object has no attribute 'value'

Hi,

Following the instructions in the ReadMe file, I downloaded the .h5 files from figshare, put them in the "data" directory in code/python, and then tried to run the analysis_FeaturePrediction.py script. This is what it gives me:

Loading data
Traceback (most recent call last):
  File "C:/PhD/Courses/Biosignal processing/Project/PythonProject/analysis_FeaturePrediction.py", line 306, in <module>
    main()
  File "C:/PhD/Courses/Biosignal processing/Project/PythonProject/analysis_FeaturePrediction.py", line 67, in main
    data_all[sbj] = bdpy.BData(subjects[sbj][0])
  File "C:\PhD\Courses\Biosignal processing\Project\PythonProject\venv\lib\site-packages\bdpy\bdata\bdata.py", line 79, in __init__
    self.load(file_name, file_type)
  File "C:\PhD\Courses\Biosignal processing\Project\PythonProject\venv\lib\site-packages\bdpy\bdata\bdata.py", line 708, in load
    self.__load_h5(load_filename)
  File "C:\PhD\Courses\Biosignal processing\Project\PythonProject\venv\lib\site-packages\bdpy\bdata\bdata.py", line 874, in __load_h5
    if isinstance(v.value, np.ndarray):
AttributeError: 'Dataset' object has no attribute 'value'

Process finished with exit code 1

I tried to debug the code but couldn't get much out of it and would appreciate any help greatly.

Thank you.
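This error comes from an h5py API change rather than from the data: the Dataset.value accessor was deprecated in h5py 2.x and removed in h5py 3.0, so bdpy versions that still call v.value break under a recent h5py. Pinning h5py below 3.0 (e.g. pip install "h5py<3") or patching the offending line should fix it. A minimal illustration of the replacement, using a throwaway in-memory file:

```python
import h5py
import numpy as np

# Throwaway in-memory HDF5 file for illustration.
f = h5py.File('value_demo.h5', 'w', driver='core', backing_store=False)
f['x'] = np.arange(4)

ds = f['x']
# Old accessor, removed in h5py >= 3.0:  data = ds.value
data = ds[()]   # supported replacement; works in h5py 2.x and 3.x
print(data)
f.close()
```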

Training Session Image Representation

Hi,
I use your dataset in my thesis and I'm confused about one thing. You wrote that in the training session each image is presented once, but shouldn't we treat it as 5 images each being shown twice within a run? In other words, is it correct to consider that fMRI signals were generated twice for, e.g., the image at index 456? (screenshot attached)

analysis_FeaturePrediction.m error

Hello,

Every time I try to run the analysis_FeaturePrediction.m script, the get_refdata function raises an error because the trainlabels argument must be a vector, not an array. Any ideas or help with this problem?

Thank you in advance!

warning and operand error

I got a warning in the analysis-feature-prediction merge-results file. That by itself is not a problem, but the code was then interrupted at Subject2, CNN8. I have attached a screenshot (error.png).

weight_out_delay_time.m

Hello, when I run the Matlab code analysis_FeaturePrediction.m, an error occurs at line 251: when the predict_out function calls the weight_out_delay_time function, it requires an array of size 7516192768001x1750 (98000000.0GB), which exceeds the maximum array size. Is there any way to solve this problem?

the difference between fastcorr and corrcoef

Hi, recently I have been doing a similar analysis on my own dataset. I wonder what the difference is between these two lines of code:

predAcc.image.perception = nanmean(diag(fastcorr(predPercept, testFeat)))
predAcc.image.perception = corrcoef(predPercept, testFeat)

Using corrcoef, the prediction accuracy on my data is much better than with fastcorr.
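A plausible explanation (my reading, since fastcorr is a helper bundled with the demo code): fastcorr computes the correlation column by column, so nanmean(diag(fastcorr(predPercept, testFeat))) averages per-feature correlations, whereas MATLAB's corrcoef(A, B) with two matrix inputs first converts each matrix into a single column vector, so between-feature differences in mean inflate the coefficient. The numpy sketch below reproduces the two behaviours on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
pred = rng.standard_normal((50, 10))                # predicted features
true = 0.3 * pred + rng.standard_normal((50, 10))   # noisy ground truth
offsets = np.arange(10) * 5.0                       # per-feature mean differences
pred, true = pred + offsets, true + offsets

def colcorr(a, b):
    """Pearson correlation of each column of a with the same column of b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Mean of per-feature correlations (what diag(fastcorr(...)) is averaged over).
per_feature = colcorr(pred, true).mean()

# MATLAB's corrcoef(A, B) flattens both matrices into single vectors first,
# so the large between-feature mean differences dominate the coefficient.
flattened = np.corrcoef(pred.ravel(), true.ravel())[0, 1]

print(per_feature, flattened)
```

On this synthetic data the flattened coefficient comes out close to 1 while the per-feature average stays near the true signal level, which would match the "much better results" observed with corrcoef; the two lines simply measure different things.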

Finding all 1200 and 50 train and test images

Hi @mitsuaki, I am having trouble finding all the train and test images. I used the URL links and downloaded the images from the internet directly, but many links were broken and some images were corrupt, so the total is no longer 1200; I have 838 images in total for training and testing. If possible, could you please share all the training and test images, or the images_112.npz file? It would help me a lot.

Also, since I have fewer than 1200 images, I might run into problems matching them with the corresponding fMRI Matlab files. Please advise. Thanks.

Why is the length of the 'ROI_VC' data different in the two files?

Hello! Firstly, thank you for your excellent work! I have one question about the dataset.

I downloaded the data from figshare. There are two kinds of files: Subject1.h5 and Subject1_ImageNetTraining/ImageNetTest/Imagery.h5 (taking Subject1 as an example). I used bdpy to select the 'ROI_VC' data, but the lengths are 4466 and 3444, respectively. Why does this happen? Is it because one was preprocessed with SPM and the other with fMRIPrep?

Thank you!

About ImageFeature file

I would like to ask about more details of the image feature file: how can its shape be (16234,) when there are only 1250 stimulus photos?

run export_volume.m error

When I run the Matlab script export_volume.m, it needs some *.nii files (./data/Subject%0d_SpaceTemplate.nii and ./data/Subject%0d_Func.nii) and outputs the error:
File "./data/Subject1_SpaceTemplate.nii" does not exist.
How do I get these files? Looking forward to your reply.

Number of voxels in the data mismatches the config?

I just have another quick question:
Is this the correct way of extracting brain data from a particular ROI? If I run the following code and check the shape of X, the number of voxels in 'HVC' is 2049; however, according to the config file, num_voxel['HVC'] is 1000. Am I misunderstanding something?

import bdpy
import numpy as np
import god_config as config  # the demo's configuration module

subjects = config.subjects
rois = config.rois
num_voxel = config.num_voxel

sbj = 'Subject1'
roi = 'HVC'

data = bdpy.BData(subjects[sbj][0])
X = data.select(rois[roi])
print(np.shape(X))
print(num_voxel[roi])

Thank you very much in advance!
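If I read the demo code correctly (this is my assumption, not a confirmed answer), data.select(rois[roi]) returns every voxel inside the ROI mask, and the num_voxel limit is applied later, inside the training loop, where the top voxels are selected by how strongly they correlate with the target feature. So getting 2049 voxels for 'HVC' at extraction time would be expected. A minimal numpy sketch of that kind of top-N voxel selection:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2049))    # samples x voxels (e.g. all of 'HVC')
y = X[:, 0] + rng.standard_normal(100)  # a target feature (synthetic)

# Score each voxel by |Pearson correlation| with the target, keep the top N.
Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
score = np.abs(Xc.T @ yc) / len(y)      # |r| per voxel

num_voxel = 1000
top = np.argsort(score)[::-1][:num_voxel]
X_selected = X[:, top]
print(X_selected.shape)                 # (100, 1000)
```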
