
msdnet's Introduction

MSDNet

This repository provides the code for the paper Multi-Scale Dense Networks for Resource Efficient Image Classification.

Update on April 3, 2019 -- PyTorch implementation released!

A PyTorch implementation of MSDNet can be found here.

Introduction

This paper studies convolutional networks that require limited computational resources at test time. We develop a new network architecture that performs on par with state-of-the-art convolutional networks, whilst facilitating prediction in two settings: (1) an anytime-prediction setting in which the network's prediction for one example is progressively updated, facilitating the output of a prediction at any time; and (2) a batch computational budget setting in which a fixed amount of computation is available to classify a set of examples that can be spent unevenly across 'easier' and 'harder' examples.

Figure 1: MSDNet layout (2D).

Figure 2: MSDNet layout (3D).

Results

(a) anytime-prediction setting

Figure 3: Anytime prediction on ImageNet.

(b) batch computational budget setting

Figure 4: Prediction under batch computational budget on ImageNet.

Figure 5: Random example images from the ImageNet classes Red wine and Volcano. Top row: images correctly predicted and exited at the first classifier of an MSDNet; Bottom row: images that failed to be classified correctly at the first classifier but were correctly predicted and exited at the last classifier.

Usage

Our code is written under the framework of Torch ResNet (https://github.com/facebook/fb.resnet.torch). The training scripts come with several options, which can be listed with the --help flag.

th main.lua --help

Configuration

In all experiments, we use a validation set for model selection. We hold out 5,000 training images on CIFAR and 50,000 on ImageNet as the validation set.
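For concreteness, such a split amounts to holding out a fixed random subset of training indices. A minimal sketch (the seed and index layout here are illustrative; the actual validation indices are generated into the `-gen` directory and shipped with the pre-trained models):

```python
import random

# Illustrative CIFAR-style holdout: 5,000 of the 50,000 training images
# are reserved for model selection; only the indices are stored.
random.seed(0)
indices = list(range(50_000))              # CIFAR training set size
random.shuffle(indices)
val_idx, train_idx = indices[:5_000], indices[5_000:]
print(len(val_idx), len(train_idx))        # 5000 45000
```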

Training recipe

Train an MSDNet with 10 classifiers attached to every other layer for anytime prediction:

th main.lua -netType msdnet -dataset cifar10 -batchSize 64 -nEpochs 300 -nBlocks 10 -stepmode even -step 2 -base 4

Train an MSDNet with 7 classifiers, with linearly increasing spans between them, for efficient batch computation:

th main.lua -netType msdnet -dataset cifar10 -batchSize 64 -nEpochs 300 -nBlocks 7 -stepmode lin_grow -step 1 -base 1

Pre-trained ImageNet Models

  1. Download model checkpoints and the validation set indices.

  2. Testing script: th main.lua -dataset imagenet -testOnly true -resume <path-to-.t7-model> -data <path-to-image-net-data> -gen <path-to-validation-set-indices>

FAQ

  1. How to calculate the FLOPs (or mul-add op) of a model?

We strongly recommend doing it automatically. Please refer to the op-counter project (LuaTorch) or the script in CondenseNet (PyTorch). The basic idea of these op counters is to register a hook on each module so that per-layer op counts are accumulated during the model's forward pass.
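What each hook actually accumulates is a closed-form per-layer count; for a 2-D convolution the mul-adds reduce to a simple product. A minimal, framework-free sketch of that product (the function name and example layer shape are illustrative, not from either project):

```python
def conv2d_mul_adds(c_in, c_out, k, h_out, w_out, groups=1):
    """Mul-add count of one 2-D convolution: each of the c_out * h_out * w_out
    output values combines a k x k x (c_in / groups) receptive field."""
    return (c_in // groups) * c_out * k * k * h_out * w_out

# Example: a 3x3 conv from 3 to 32 channels on a 32x32 feature map
print(conv2d_mul_adds(3, 32, 3, 32, 32))  # 884736
```

An op-counter project sums such terms over every module the forward pass visits, which is why the hook-based approach needs no manual bookkeeping of the architecture.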

msdnet's People

Contributors

gaohuang, taineleau


msdnet's Issues

Pruning logic

According to the paper,

One simple strategy to reduce the size of the network is by splitting it into S blocks along the depth dimension, and only keeping the coarsest (S - i + 1) scales in the ith block.

How is the network split into S blocks? According to the pruning logic in JointTrainContainer.lua,

elseif opt.prune == 'max' then
   local interval = torch.ceil(layer_all / opt.nScales)
   inScales = opt.nScales - torch.floor(math.max(0, layer_tillnow - 2) / interval)
   outScales = opt.nScales - torch.floor((layer_tillnow - 1) / interval)

Consider a toy example with 4 blocks, linearly increasing span, step = 1, base = 1 and a maximum of 3 scales. The number of layers with input scales 3, 2 and 1 are 5, 4 and 1 respectively. Why is the distribution of the number of layers in each split uneven?
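Plugging the toy example into the quoted formula reproduces exactly that 5/4/1 distribution, which suggests the unevenness comes from `interval = ceil(10/3) = 4` combined with the `layer_tillnow - 2` offset. A Python transcription of the Lua snippet (block depths 1, 2, 3, 4, i.e. 10 layers in total, follow from lin_grow with step = 1, base = 1):

```python
import math

def scales(layer_tillnow, layer_all, n_scales):
    # Mirrors the 'max' pruning branch in JointTrainContainer.lua
    interval = math.ceil(layer_all / n_scales)
    in_scales = n_scales - max(0, layer_tillnow - 2) // interval
    out_scales = n_scales - (layer_tillnow - 1) // interval
    return in_scales, out_scales

# Toy example: 4 blocks, lin_grow, step=1, base=1 -> 1+2+3+4 = 10 layers
layer_all = 10
counts = {}
for layer in range(1, layer_all + 1):
    in_scales, _ = scales(layer, layer_all, n_scales=3)
    counts[in_scales] = counts.get(in_scales, 0) + 1
print(counts)  # {3: 5, 2: 4, 1: 1}
```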

Error while training

stack traceback:
	[C]: in function 'read'
	/home/manan/torch/install/share/lua/5.1/torch/File.lua:351: in function </home/manan/torch/install/share/lua/5.1/torch/File.lua:245>
	[C]: in function 'read'
	/home/manan/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
	/home/manan/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
	/home/manan/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
	/home/manan/MSDNet/datasets/cifar10-gen.lua:23: in function 'convertToTensor'
	/home/manan/MSDNet/datasets/cifar10-gen.lua:52: in function 'exec'
	./datasets/init.lua:29: in function 'create'
	./dataloader.lua:31: in function 'create'
	main.lua:44: in main chunk
	[C]: in function 'dofile'
	/home/manan/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00405d50

Conversion of MSDNet to Keras

Hey! I am currently researching the possibility of using your MSDNet architecture as the backbone of our proposed ConvNet. Do you know whether any efforts have been made to convert your model to a Keras-compatible format?

How to terminate the program during testing, given a confidence threshold?

Hi! Thanks for sharing wonderful code.

I wonder how the program is terminated during testing, given a threshold.
Does this happen automatically, or must it be done manually in the code?

I looked at the forward pass and did not find any procedure that stops the algorithm once a high-confidence classification result is obtained.
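The behavior I have in mind would look something like this (a hypothetical, framework-free sketch, assuming the confidence at each exit is the max softmax probability; the function name is illustrative, not from the repo):

```python
def early_exit_prediction(exit_confidences, threshold):
    """Return the index of the first classifier whose confidence clears
    the threshold; fall through to the last exit otherwise."""
    for k, confidence in enumerate(exit_confidences):
        if confidence >= threshold:
            return k                       # stop computation at exit k
    return len(exit_confidences) - 1       # no exit fired: use the last one

print(early_exit_prediction([0.41, 0.62, 0.93, 0.97], threshold=0.9))  # 2
```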

Thanks.

Environment for training

Hi.

I want to reproduce the ImageNet results, but the pre-trained model files became unavailable before I finished downloading them.
I then tried to train the ImageNet model myself, but my local environment (2× GTX 1080 Ti, Xeon v4) seems insufficient.
Could you share the environment used to train the ImageNet models (batch size = 256, training for 80 epochs)?

Thanks in advance.

Why is the number of output channels of the first convolution in the first layer 32?

According to Appendix A in the paper, for the CIFAR datasets, the number of output channels of the three scales is set to 6, 12 and 24 respectively. However, nChannels is set to 32 in msdnet.lua if initChannels is not set in the arguments. This means that the number of output channels in the first layer for the three scales is 32, 64 and 128 respectively according to the default growth rate 1-2-4-4. Why is there a difference between the implementation details in the paper and the code?

Reproduce MSDNet paper's results

Hi @gaohuang,

I have been trying lately to reproduce the results from the MSDNet paper, but I could not match the pre-trained model's accuracy. I would very much appreciate it if you could check whether the following setup (for k=4) is correct:

th main.lua -dataset imagenet -data <imagenet_dir> -gen gen -nGPU 4 -nBlocks 5 -base 7 -step 4 -batchSize 256 -nEpochs 90

vs.

th main.lua -dataset imagenet -data <imagenet_dir> -gen gen -nGPU 4 -nBlocks 5 -base 7 -step 4 -batchSize 256 -retrain msdnet--step\=4--block\=5--growthRate\=16.t7 -testOnly true

The discrepancy in accuracy ranges from -1% (1st classifier) to -6% (last classifier) in top-1.

According to your paper:

On ImageNet, we use MSDNets with four scales, and the ith classifier operates on the (k×i+3)th layer (with i=1, . . . , 5 ), where k=4, 6 and 7. For simplicity, the losses of all the classifiers are weighted equally during training.

[...]

We apply the same optimization scheme to the ImageNet dataset, except that we increase the mini-batch size to 256, and all the models are trained for 90 epochs with learning rate drops after 30 and 60 epochs.

Am I missing something in the training parameters for ImageNet?
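For concreteness, I read the schedule in the quoted passage as follows (the base learning rate of 0.1 and the ×0.1 drop factor are my assumptions, not stated in the quote):

```python
def imagenet_lr(epoch, base_lr=0.1):
    # Assumed schedule: x0.1 drop after epochs 30 and 60, 90 epochs total
    return base_lr * (0.1 ** sum(epoch >= e for e in (30, 60)))

print([imagenet_lr(e) for e in (0, 29, 30, 60, 89)])
```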
