cornet's People

Contributors

eliulm, mschrimpf, qbilius


cornet's Issues

TypeError: can't convert CUDA tensor to numpy.

Hello,

I tried a simple test with one image. Following the instructions, I ran:

python run.py test --model S --data_path images --output_path features --ngpus=1

The script fails on both Windows and Linux with:

TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

Here's my conda environment:

python 3.7.6
pandas 1.0.1
pytorch 1.4.0
numpy 1.18.1

I will be investigating a solution for this, but I was wondering whether you already have some insight into the problem. Have the modules been tested with a more recent version of PyTorch? Thanks!
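
A minimal sketch of the usual fix, moving the tensor to host memory before converting (the hook below is illustrative, not necessarily the repo's exact code):

_model_feats = []  # collected activations, one entry per batch

def _store_feats(layer, inp, output):
    # .detach().cpu() copies the tensor to host memory first,
    # which avoids the CUDA-to-numpy TypeError
    _model_feats.append(output.detach().cpu().numpy())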

CORnet-R and CORnet-RT give the same activations for all images - IT output

I'm running this on Windows 10. I have run all 4 networks on multiple image sets, and CORnet-R and CORnet-RT give the same activation values for all images in each set when the layer is 'IT' and the sublayer is 'output'. In other words, it's as if nothing is being computed: the unit activations just stay the same and are repeated for the number of images in the stimulus set. Moreover, this is true across sets too; you get the same 1x25088 vector repeated over and over. I'm executing it with the command:

python run.py test --model R --data_path Images/500Stimuli --output_path features/500Stimuli

Any insight as to what I'm doing wrong? This is not true of CORnet-Z or CORnet-S. I notice that those are written differently (a class CORblock_Z/S plus a function CORnet_Z/S, as opposed to two classes, CORblock_R/RT and CORnet_R/RT); is there some detail about how to run these that I'm missing? Apologies for the multiple issues!
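
A quick way to check is to hook the IT module directly and compare two random inputs; if the printed result is True, the bug is reproduced. This sketch assumes the cornet package's loader functions and DataParallel wrapping as in cornet/__init__.py, and that the model exposes IT as a named child; check your installed version:

import torch
import cornet

model = cornet.cornet_r(pretrained=False)  # random weights suffice for this check
model.eval()

feats = []
model.module.IT.register_forward_hook(lambda m, i, o: feats.append(o))

def it_output(img):
    feats.clear()
    with torch.no_grad():
        model(img)
    out = feats[-1]  # recurrent blocks fire the hook once per time step; take the last
    out = out[0] if isinstance(out, tuple) else out  # some blocks return (output, state)
    return out.detach().cpu()

a = it_output(torch.rand(1, 3, 224, 224))
b = it_output(torch.rand(1, 3, 224, 224))
print(torch.allclose(a, b))  # True would reproduce the reported behavior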

Where are the comments?

Who wrote this? Definitely not some sort of "real" software engineer. You call this MIT work?

Before you tell me to do this myself, look in the mirror and remove the GPL license, replacing it with an MIT license.

fire complains despite success

Run command: python run.py test --restore_path cornet_z_epoch25.pth.tar - --model Z --data_path my_images -o .

Output:

100%|██████████| 136/136 [00:04<00:00, 30.35it/s]
Fire trace:
1. Initial component
2. Accessed property "test"
3. Called routine "test" (/Users/apurvaratanmurty/dev/CORnet/run.py:187)
4. ('Could not consume arg:', '--model')

Type:        NoneType
String form: None

Usage:       run.py test --restore_path cornet_z_epoch25.pth.tar -

However, the features file was created successfully.
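
If I read the Fire trace right, the lone - in the command is python-fire's default separator: everything after it is applied to the result of test, which returns None, so Fire cannot consume --model. Dropping the stray separator (or putting all flags before it) should silence the complaint:

python run.py test --restore_path cornet_z_epoch25.pth.tar --model Z --data_path my_images -o .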

CORnet doesn't complain but doesn't do anything either

Hey guys,

When I try to run the model out of the box, nothing happens. I have realized that it works on some stimulus sets but not others. I figured this had to do with the images I want features for being grayscale, with only one channel. I modified them to have 3 identical channels, but all I get is this, and my output path stays empty. Any idea what the issue is? My apologies, I'm new to all this!

[screenshot attachment: issueScrnshot]
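
For reference, a minimal way to force 3-channel inputs with PIL before running the extractor (the folder and extension are illustrative; adjust them to your stimulus set):

from pathlib import Path
from PIL import Image

for p in Path('my_images').glob('*.png'):
    # convert() handles grayscale ('L') and palette images alike
    Image.open(p).convert('RGB').save(p)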

A little question about CORblock_S

I think the key point of this paper is that CORblock_S is applied several times within a single forward pass.
So how can you guarantee that the conv2d in each for-loop iteration is always the same one?
In my experience, when building a network with such a for-loop, the convolutions all end up with different weights.
That would be equivalent to building a relatively long ordinary feedforward network.
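
For context, PyTorch shares weights automatically when the same module instance is called repeatedly: the layer is constructed once in __init__, and the for-loop in forward just reuses it, so no new parameters are created per iteration. A minimal sketch of the idea (not the repo's exact CORblock_S):

import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    def __init__(self, channels, times=4):
        super().__init__()
        self.times = times
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # built once

    def forward(self, x):
        for _ in range(self.times):
            # same self.conv object every iteration -> same weights;
            # gradients from all time steps accumulate in it
            x = torch.relu(self.conv(x))
        return x

block = RecurrentBlock(8)
# one conv's worth of parameters, regardless of how many time steps run
print(sum(p.numel() for p in block.parameters()))  # 8*8*3*3 + 8 = 584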

Error when running test on CPU

The run script is failing when I don't specify the number of GPUs.

python run.py test --model S --data_path images --output_path features

Here's the stack trace:

Traceback (most recent call last):
  File "C:/Users/deral/University/Thesis/CORnet/run.py", line 350, in <module>
    fire.Fire(command=FIRE_FLAGS)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\fire\core.py", line 471, in _Fire
    target=component.__name__)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\fire\core.py", line 675, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "C:/Users/deral/University/Thesis/CORnet/run.py", line 226, in test
    model(im)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
    input = module(input)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
    input = module(input)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "C:\Users\deral\Anaconda3\envs\PyThesis\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _thnn_conv2d_forward

Process finished with exit code 1

I tried to debug the execution and found that the checkpoint data is loaded correctly. At the line

ckpt_data = torch.utils.model_zoo.load_url(url, map_location=map_location)
the weights in state_dict have their device correctly set to cpu. However, after load_state_dict is called, the weights in the model have their device set to cuda:0. Could the problem be there?
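
That symptom points at the model rather than the checkpoint: load_state_dict copies weights into the model's existing parameters, so they stay on whatever device the model already occupies, and map_location alone cannot undo an earlier model.cuda() call. A minimal CPU-only loading sketch (build_model and CKPT_URL are hypothetical stand-ins for whatever run.py actually uses):

import torch

model = build_model()  # hypothetical constructor; keep on CPU, no .cuda() or DataParallel
ckpt = torch.utils.model_zoo.load_url(CKPT_URL, map_location='cpu')  # CKPT_URL is hypothetical
model.load_state_dict(ckpt['state_dict'])  # assuming the checkpoint stores a 'state_dict' key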

CORnet inconsistent vector length

Disclaimer: I do not know whether this is an issue with CORnet or with the THINGSvision wrapper for CORnet (see: https://github.com/ViCCo-Group/THINGSvision).

When running images through CORnet-Z, the IT output layer, as far as I understand from the CORnet preprint, is supposed to be 7x7x512, which when flattened gives a vector of length 25088. This is indeed what happens when apply_center_crop is set to True. However, when it is set to False, the output vectors for the same images (224 x 224 pixels), same layer/module, end up being of length 32768. This behavior does not appear to be layer-specific; for example, I also tested the V1 output layer, and its vector length likewise depends on whether the center crop is applied.

I am not quite sure whether this is a bug or "normal" behavior that I simply do not understand.
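
The numbers themselves are internally consistent: CORnet's total stride from input to IT is 32, so a 224 x 224 input gives 7 x 7 x 512 = 25088 features, while 32768 = 8 x 8 x 512 is exactly what a 256 x 256 input would produce. A plausible reading (an assumption about the wrapper, not confirmed) is that with apply_center_crop=False the images are resized to 256 but never cropped to 224. A quick sanity check:

def it_feat_len(input_px, stride=32, channels=512):
    # spatial side after the network's total downsampling, then flatten
    side = input_px // stride
    return side * side * channels

print(it_feat_len(224))  # 25088 -> matches apply_center_crop=True
print(it_feat_len(256))  # 32768 -> matches apply_center_crop=False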

training Z slow

I'm training CORnet-Z on a Titan X with 20 workers. One epoch takes about 5 hours, which is far too long (i.e., I cannot train the whole thing in 28 hours).

How to get Brain-Score?

After I get the output features for a test image (e.g. CORnet-S_decoder_avgpool_feats.npy), how do I get the Brain-Score? Can you show me the detailed steps? Thanks.
