# MobileNet

A TensorFlow implementation of Google's MobileNets: *Efficient Convolutional Neural Networks for Mobile Vision Applications*.
## Base Module

## Accuracy on ImageNet-2012 Validation Set
| Model | Width Multiplier | Preprocessing | Top-1 Accuracy | Top-5 Accuracy |
|---|---|---|---|---|
| MobileNet | 1.0 | Same as Inception | 66.51% | 87.09% |
Click to download the pretrained weights.
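The accuracy above comes at a fraction of a standard CNN's cost because MobileNet replaces standard convolutions with depthwise separable ones. A minimal sketch of the parameter savings, using the paper's notation (DK = kernel size, M = input channels, N = output channels); the layer shape below is illustrative, not taken from this implementation:

```python
def standard_conv_params(dk, m, n):
    # A DKxDK standard convolution maps M channels to N channels directly.
    return dk * dk * m * n

def depthwise_separable_params(dk, m, n):
    # A DKxDK depthwise conv over M channels, then a 1x1 pointwise conv M -> N.
    return dk * dk * m + m * n

dk, m, n = 3, 256, 256  # illustrative layer shape
std = standard_conv_params(dk, m, n)         # 589,824 parameters
sep = depthwise_separable_params(dk, m, n)   # 67,840 parameters
print("standard: %d, separable: %d, ratio: %.1fx" % (std, sep, std / float(sep)))
```

For a 3x3 kernel the reduction is roughly 8-9x, which is where most of MobileNet's efficiency comes from.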
## Loss
## Time Benchmark

Environment: Ubuntu 16.04 LTS, TensorFlow 1.0.1 (native pip install).

| Device | Forward | Forward-Backward | Remark |
|---|---|---|---|
| Intel Xeon E3-1231 v3, 4 cores @ 3.40 GHz | 52 ms/img | 503 ms/img | Without instruction-set acceleration |
| NVIDIA GTX 1060 | 3 ms/img | 16 ms/img | CUDA 8.0, cuDNN 5.1 |
## Usage

### Train on ImageNet

- Prepare the ImageNet data. Please refer to Google's tutorial for training Inception.
- Modify `./script/train_mobilenet_on_imagenet.sh` to match your environment, then run:

```bash
bash ./script/train_mobilenet_on_imagenet.sh
```
### Benchmark speed

```bash
python time_benchmark.py
```
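If you only want a rough number without the full script, the measurement boils down to averaging wall-clock time over repeated runs after a warm-up (the first TensorFlow run pays one-off graph-construction cost). A minimal sketch; `forward` here is a hypothetical stand-in for one `sess.run` of the MobileNet forward pass:

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Average wall-clock time per call of `fn`, in milliseconds,
    excluding warm-up iterations."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1000.0

# Dummy workload standing in for a real forward pass.
forward = lambda: sum(i * i for i in range(10000))
print("%.2f ms / img" % benchmark(forward))
```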
## Troubleshooting

- About the MobileNet model size

  According to the paper, the MobileNet body has 3.3 million parameters, and this count does not vary with the input resolution. The final model therefore has more than 3.3 million parameters, because of the additional fully-connected layer.

  When training with RMSProp, the checkpoint file is roughly three times as large as the model itself, because RMSProp keeps auxiliary slot variables for every trainable parameter. You can use `inspect_checkpoint.py` to verify this.
- Pretrained weights

  You are welcome to share if you have trained a better model.
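On the checkpoint-size point above: TensorFlow's `RMSPropOptimizer` creates two slot variables per trainable variable (the moving average of squared gradients and a momentum slot), so a checkpoint holds three tensors per parameter. A back-of-the-envelope sketch; the parameter count is an assumed round number for illustration, not the exact MobileNet figure:

```python
# RMSProp stores 2 auxiliary slots ("rms" and "momentum") per variable,
# so checkpoint size is roughly 3x the raw weight size.
params = 4200000                 # assumed total parameter count, for illustration
bytes_per_param = 4              # float32
model_mb = params * bytes_per_param / float(2 ** 20)
ckpt_mb = model_mb * 3           # weights + rms slot + momentum slot
print("model: %.1f MB, RMSProp checkpoint: ~%.1f MB" % (model_mb, ckpt_mb))
```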
## TODO

- Train on ImageNet
- Add width multiplier hyperparameter
- Report training results
- Integrate into an object detection task
## Reference

Howard et al., *MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications*, arXiv:1704.04861, 2017.