
Comments (15)

shelhamer commented:

You can access the features computed in any layer through CaffeNet.blobs(), which is a list of blobs, each with a property data that's an ndarray with the layer's features. Try dir(net.blobs()[0]) to see the list of properties.

net.Forward(input_blobs, output_blobs) # do forward pass to compute features
output = net.blobs()[-1].data # = numpy ndarray of the output blob data
midlevel = net.blobs()[10].data # = numpy ndarray of the 11th blob data
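
For reference, here is a minimal sketch of walking every blob under that interface; it assumes net has already been constructed and Forward() has run as above:

for i, blob in enumerate(net.blobs()):  # blobs() is a plain list in this old interface
    print(i, blob.data.shape)           # each blob.data is a numpy ndarray of features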

This will be more polished with #112, which will allow indexing by layer name as in DeCAF. Once the wrapper matures a little, a demo notebook will be added.


shelhamer commented:

As of v0.99, the new Python wrapper is in master.

You can access the features computed in any layer through caffe.Net.blobs, an ordered dictionary of blobs indexed by layer name, each with a property data that's an ndarray holding that layer's activations. Try dir(net.blobs['fc7']) to see the list of properties.

net.Forward(input_blobs, output_blobs) # do forward pass to compute features
conv2 = net.blobs['conv2'].data # = numpy ndarray of the conv2 layer
fc7 = net.blobs['fc7'].data # = numpy ndarray of the fc7 layer
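
Putting this together, here is a minimal end-to-end sketch. The file paths are placeholders, the random input stands in for real image preprocessing, and the constructor and lowercase forward() spellings are the ones used by later pycaffe versions:

import numpy as np
import caffe

# placeholder paths; the exact constructor signature varies across Caffe versions
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# fill the input blob with dummy data (real code would preprocess an image)
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)

net.forward()                           # run the full forward pass
conv2 = net.blobs['conv2'].data.copy()  # copy: blob memory is reused on the next pass
fc7 = net.blobs['fc7'].data.copy()      # fc7 activations for the whole batch
print(conv2.shape, fc7.shape)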


jackiechensuper commented:

I know there is a dump_network .cpp file; how can I use it from Python? Thanks for answering!


zyan0 commented:

I need a tutorial on how to extract features too. Thanks for answering!


shelhamer commented:

Extracting features from the full network through the Python wrapper will be added by pull request #11, which we hope to merge soon.


shelhamer commented:

The network blobs and parameters are now exposed through the Python wrapper by the merge of #11.


junwang4 commented:

Can anyone provide an example of how to extract the features of a specific layer in Python?

In DeCAF, for example, to get the features of layer 6 I use this:
feature = net.feature('fc6_cudanet_out')
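
For comparison, the rough Caffe equivalent of that DeCAF call, following shelhamer's answer above (fc6 is the blob name in the reference ImageNet prototxt):

net.forward()                    # assumes net is a loaded caffe.Net, as in the sketch above
fc6 = net.blobs['fc6'].data      # analogous to DeCAF's net.feature('fc6_cudanet_out')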


sharathchandra92 commented:

Hi, what's the MATLAB alternative to this? I am trying to get the features in MATLAB, but get_weights seems to give only the weights, and a forward pass gives the final-layer predictions. Can I get the intermediate features?


sguada commented:

You can modify the prototxt and remove the top layers to get the middle features. There is no direct way to do it in the MATLAB wrapper yet.

Sergio



shelhamer commented:

Exposing all the blobs in MATLAB as they are in Python would make a good pull request!


sharathchandra92 commented:

Just wanted to confirm: the DeCAF6 layer mentioned in the experiments of the paper "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition" refers to fc6 after ReLU and dropout have been applied, right? Or is it without the ReLU/dropout?


junwang4 commented:

I think the fc6 features must be taken after the ReLU and Dropout layers, because they contain no negative values. But the log shows this order (which is somewhat confusing to me too):
...
Creating Layer fc6
Creating Layer relu6
Creating Layer drop6
Creating Layer fc7
...

In addition, I have two related questions:

  1. How can we get the features before the ReLU is applied? With another deep learning package (OverFeat), I seem to get much better classification accuracy when I use the layer before the ReLU.
  2. Should the output of the Dropout layer differ every time I run feature extraction, or does it differ only during training? Is there a way to get different Dropout outputs when just running feature extraction?


sharathchandra92 commented:

@junwang4 I think the answers to your questions are as follows:

  1. In the prototxt file, if you remove the relu and dropout layers that follow fc6, you should get the features before the ReLU (see the sketch after this comment). With OverFeat, for which task exactly do these features give you better accuracy?
  2. At training time, half of the activations are randomly set to 0, and at test time all activations are multiplied by 0.5. So for a pretrained model I don't think there will be any difference in the features between runs.

So DeCAF6 is the ReLU activations after fc6, right? It'd be great if @Yangqing or @jeffdonahue (the authors of the paper) could confirm this. Thanks :)
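
One caveat on point 1: in the reference prototxt, relu6 is an in-place layer, so after a full forward pass the fc6 blob already holds the post-ReLU values. Here is a sketch of two ways to get the pre-ReLU activations, assuming a pycaffe version whose forward() accepts an end= layer name:

# Option A: stop the forward pass at fc6, before the in-place relu6 overwrites it
net.forward(end='fc6')
fc6_pre_relu = net.blobs['fc6'].data.copy()

# Option B: edit the prototxt so relu6 is no longer in-place (give it a top name
# other than fc6); then a normal net.forward() keeps both blobs around.

On point 2: Caffe implements "inverted" dropout, scaling at training time and passing data through unchanged at test time, so feature extraction with a trained model is deterministic either way.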


junwang4 commented:

Thanks, Sharath! I will try your suggestion of removing the relu and dropout layers after fc6. As for OverFeat, I got the idea of using the layer before the ReLU from: http://fastml.com/yesterday-a-kaggler-today-a-kaggle-master-a-wrap-up-of-the-cats-and-dogs-competition/

[ UPDATE ]
I got a chance to test the effect of using or not using the ReLU on the Kaggle dogs-vs-cats training data. It turns out there was no significant difference between using the ReLU and not using it. I tested with Caffe and OverFeat (and with SVM and logistic regression classifiers), and none showed a big difference on the dogs-vs-cats data.


651juan commented:

Hi, what's the MATLAB alternative to this? I am trying to get the features in MATLAB, but get_weights seems to give only the weights, and a forward pass gives the final-layer predictions. Can I get the intermediate features?

This is now possible. I was searching for how to do this and managed it with the following command, where fc7 is the blob name:
net.blobs('fc7').get_data()

