
vgg-19-feature-extractor's Introduction

Fast Multi-threaded VGG 19 Feature Extractor

Overview

This tool lets you extract deep visual features from a pre-trained VGG-19 net for collections of images in the millions. Images are loaded and preprocessed in parallel using multiple CPU threads, then shipped to the GPU in minibatches for the forward pass through the net. Model weights are downloaded for you and loaded using Torch's loadcaffe library, so you don't need to compile Caffe.
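
To make the loading pattern concrete, here's a minimal sketch (not the repo's actual code) of how a threaded loader like this is typically wired up with Torch's threads package. nBatches, loadBatch, writeFeatures and model are hypothetical placeholders, and cutorch is assumed for the :cuda() call:

local threads = require 'threads'

local nThreads = 8
local pool = threads.Threads(nThreads, function()
   require 'image'   -- each worker thread gets its own copy of the image library
end)

for b = 1, nBatches do
   pool:addjob(
      -- runs in a worker thread: decode and preprocess one minibatch on the CPU
      function() return loadBatch(b) end,
      -- runs back in the main thread: ship the batch to the GPU and forward it
      function(batch) writeFeatures(model:forward(batch:cuda())) end
   )
end
pool:synchronize()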

For every image, the feature extractor computes a 4096-dimensional feature vector containing the activations of the hidden layer immediately before VGG-19's object classifier. The activations are ReLU-ed and L2-normalized, which means they can be used as generic off-the-shelf features for tasks like classification or image similarity.
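
As a rough sketch of that post-processing step (assuming feats is a batchSize x 4096 FloatTensor of fully-connected activations; this is not necessarily the exact code in the repo):

feats:clamp(0, math.huge)                               -- ReLU: zero out negative activations
local norms = feats:norm(2, 2):clamp(1e-12, math.huge)  -- per-row L2 norm (avoid divide-by-zero)
feats:cdiv(norms:expandAs(feats))                       -- scale each row to unit length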

Example usage

You point it at a tab-separated file of (image_id, path to image on disk) pairs, e.g.

12      /home/username/images/12.jpg
342     /home/username/images/342.jpg
169     /home/username/images/169.jpg
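
If your ids and paths live in a Lua table, a file in this format can be written with a few lines (the table contents and the image_list.txt filename here are just illustrative):

local images = {
   [12]  = '/home/username/images/12.jpg',
   [342] = '/home/username/images/342.jpg',
   [169] = '/home/username/images/169.jpg',
}

local f = assert(io.open('image_list.txt', 'w'))
for id, path in pairs(images) do
   f:write(string.format('%d\t%s\n', id, path))   -- one tab-separated (image_id, path) pair per line
end
f:close()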

specified by the -data flag, and it creates a tab-separated file of (image_id, JSON-encoded VGG vector) pairs, e.g.

12      [4096 dimensional vector]
342     [4096 dimensional vector]
169     [4096 dimensional vector]

specified by the -outFile flag.

-nThreads tells it how many CPU loader threads to use. -batchSize tells it how many images to put in each minibatch. The higher the batchSize, the higher the throughput, so I'd make this as large as your GPU memory will allow.

Example:

th main.lua -data [tab separated file of (image_id, path_to_image_on_disk)] -outFile out_vecs -nThreads 8 -batchSize 128
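
Each line of the output file can then be read back with any JSON decoder; for example with the lua-cjson package (an assumption here, the repo doesn't mandate a particular decoder):

require 'torch'
local cjson = require 'cjson'

for line in io.lines('out_vecs') do
   local id, json_vec = line:match('([^\t]+)\t(.+)')   -- split on the first tab
   local vec = torch.Tensor(cjson.decode(json_vec))    -- 4096-dimensional feature vector
   -- ... use (id, vec) for classification, nearest-neighbour search, etc.
end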

Requirements

Torch with CUDA support (the default -backend is cudnn), plus the loadcaffe package. The VGG-19 model weights are downloaded for you automatically.

Why should I care about pre-trained deep convnet features in the first place?

  • They're powerful and transferable: Razavian et al. show that these kinds of deep features can be used off-the-shelf to beat highly tuned state-of-the-art methods on challenging fine-grained classification problems. That is, you can use the same features that distinguish a boat from a motorcycle to accurately tell two species of birds apart, even when the differences between species are extremely subtle. They report superior results to traditional feature representations like SIFT, HOG, and visual bags of words.
  • They're interpretable: Zeiler and Fergus show that the learned representations are far from a black box. They're actually quite interpretable: lower layers of the network learn filters that fire when they see color blobs, edges, lines, and corners.

Middle layers see combinations of these lower-level features, forming filters that respond to common textures.

Higher layers see combinations of these middle layers, forming filters that respond to object parts, and so on.

You can see the actual content of the image becoming increasingly explicit along the processing hierarchy.

  • They're cheap: You only need to do one forward pass on a pre-trained net to get them (see the sketch after this list).
  • They're the go-to visual component in some pretty incredible new machine vision applications, like automatically describing images from raw pixels.

Or being able to embed images and words in a joint space, then do vector arithmetic in the learned space:

Yep, that's the multimodal vector for an image of a blue car, minus the vector for the word "blue", plus the vector for "red", resulting in a vector that lands near images of red cars.
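
To illustrate the "one forward pass" point from the list above, here's a hedged sketch of pulling the 4096-dimensional activations out of a loadcaffe-loaded VGG-19. The prototxt/caffemodel paths are placeholders, the assumption that the classifier sits in the last few modules is just that, and the preprocessing this repo does in donkey.lua (BGR conversion, mean subtraction) is omitted for brevity:

require 'nn'
require 'image'
local loadcaffe = require 'loadcaffe'

-- load the pre-trained net (paths are placeholders)
local model = loadcaffe.load('VGG_ILSVRC_19_layers_deploy.prototxt',
                             'VGG_ILSVRC_19_layers.caffemodel', 'nn')
model:float()
model:evaluate()

-- drop trailing modules (softmax, 1000-way classifier) until the last one is
-- the ReLU that follows the final 4096-dimensional hidden layer
while torch.type(model.modules[#model.modules]) ~= 'nn.ReLU' do
   model:remove()
end

-- a 3x224x224 image; real preprocessing (BGR order, mean subtraction) omitted here
local img = image.scale(image.load('example.jpg', 3, 'float'), 224, 224)
local feats = model:forward(img)   -- 4096-dimensional ReLU-ed activations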

When should I use these features?

Take advice from here (actually go read the entire course, it's amazing).

Thanks to

Soumith Chintala for the scalable loader starter code, and Andrej Karpathy's course for teaching me about all this stuff in the first place.

vgg-19-feature-extractor's People

Contributors

coreylynch


vgg-19-feature-extractor's Issues

BGR values

The BGR mean values should be {104, 117, 124} instead of {123.68, 116.779, 103.939}. I think the order is reversed in donkey.lua :-)
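
For context, a small sketch of the channel-order point being made (the numbers are taken from the issue itself; whether donkey.lua actually has them reversed is the question being raised):

-- per-channel means in RGB order, as commonly quoted for the VGG nets
local mean_rgb = {123.68, 116.779, 103.939}
-- if the image tensor is stored in BGR channel order, the same means must be reversed:
local mean_bgr = {mean_rgb[3], mean_rgb[2], mean_rgb[1]}   -- {103.939, 116.779, 123.68}, i.e. roughly {104, 117, 124}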

Program fails with empty newline at the end of input txt file

There's an issue with the robustness of the extractor: it fails when an extra empty newline is present at the end of the input image list. There should be a check at dataset.lua, line 63.

Something along the lines of

for line in io.lines(imagePathFile) do
   if line ~= nil and line ~= '' then   -- skip blank lines (e.g. a trailing newline)
      local id_path_label = line:split("\t")
      ffi.copy(id_data, id_path_label[1])
      ffi.copy(path_data, id_path_label[2])

      id_data = id_data + idMaxLength
      path_data = path_data + pathMaxLength

      if self.verbose and count % 10000 == 0 then
         xlua.progress(count, length)
      end
      count = count + 1
   end
end

If this check isn't there, the program will crash on the copy statement.

Where is the normalization done?

Hello,

You mention that "The activations are ReLU-ed and L2-normalized". Could you please point me to where the L2-normalization is applied, and how is it done?

Error: bad argument #3 to 'copy' (value expected)

I'm trying to extract features for a single image using the default options and getting the following error. Please let me know if something is missing on my end.

{
  data : "/tmp/vgg19.txt"
  manualSeed : 2
  cache : "cache"
  nThreads : 4
  backend : "cudnn"
  batchSize : 128
  outFile : "/tmp/vgg19Feat.txt"
}
Writing vectors to: /tmp/vgg19Feat.txt
Starting worker with id: 2 seed: 4
Starting worker with id: 3 seed: 5
Starting worker with id: 1 seed: 3
Starting worker with id: 4 seed: 6
Creating test metadata
loading the large list of image paths to self.imagePath and caching
Creating test metadata
loading the large list of image paths to self.imagePath and caching
Creating test metadata
loading the large list of image paths to self.imagePath and caching
Creating test metadata
loading the large list of image paths to self.imagePath and caching
/home/ubuntu/torch/install/bin/luajit: ...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 3 callback] /home/ubuntu/vgg-19-feature-extractor/dataset.lua:64: bad argument #3 to 'copy' (value expected)
stack traceback:
        [C]: in function 'copy'
        /home/ubuntu/vgg-19-feature-extractor/dataset.lua:64: in function '__init'
        /home/ubuntu/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/ubuntu/torch/install/share/lua/5.1/torch/init.lua:87>
        [C]: in function 'dataLoader'
        /home/ubuntu/vgg-19-feature-extractor/donkey.lua:61: in main chunk
        [C]: in function 'dofile'
        /home/ubuntu/vgg-19-feature-extractor/data.lua:22: in function </home/ubuntu/vgg-19-feature-extractor/data.lua:16>
        [C]: in function 'xpcall'
        ...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
        /home/ubuntu/torch/install/share/lua/5.1/threads/queue.lua:65: in function </home/ubuntu/torch/install/share/lua/5.1/threads/queue.lua:41>
        [C]: in function 'pcall'
        /home/ubuntu/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
        [string "  local Queue = require 'threads.queue'..."]:13: in main chunk
stack traceback:
        [C]: in function 'error'
        ...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
        ...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:264: in function 'synchronize'
        ...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:142: in function 'specific'
        ...e/ubuntu/torch/install/share/lua/5.1/threads/threads.lua:125: in function 'Threads'
        /home/ubuntu/vgg-19-feature-extractor/data.lua:11: in main chunk
        [C]: in function 'dofile'
        main.lua:18: in main chunk
        [C]: in function 'dofile'
        ...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670
