Official Fast AutoAugment implementation in PyTorch.
- Fast AutoAugment learns augmentation policies using a more efficient search strategy based on density matching.
- It speeds up the policy search by orders of magnitude compared to AutoAugment while maintaining comparable performance.
We have not released the augmentation search code at this time; it will be made public together with our follow-up studies.
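While the search code is unreleased, the core idea can be sketched. Density matching evaluates a candidate policy by how well a model trained on un-augmented data performs on validation data augmented by that policy, so no per-candidate retraining is needed. Below is a minimal, illustrative sketch of that ranking step, not the official implementation; `model`, `candidates`, `valid_loader`, and `augment_fn` are assumed placeholders, and in the paper candidates are proposed via Bayesian optimization (HyperOpt on Ray) rather than enumerated.

```python
import torch
import torch.nn.functional as F

def policy_loss(model, policy, valid_loader, augment_fn, device="cuda"):
    """Loss of an already-trained model on the augmented validation split.

    A low loss means data augmented by `policy` still matches the
    distribution the model was trained on (density matching).
    """
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for images, labels in valid_loader:
            images = augment_fn(policy, images)  # hypothetical helper
            logits = model(images.to(device))
            total += F.cross_entropy(logits, labels.to(device), reduction="sum").item()
            count += labels.size(0)
    return total / count

def search_policies(model, candidates, valid_loader, augment_fn, top_k=10):
    # Rank candidate policies by density-matching loss; keep the best ones.
    return sorted(candidates,
                  key=lambda p: policy_loss(model, p, valid_loader, augment_fn))[:top_k]
```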
Search : 3.5 GPU-hours (1428x faster than AutoAugment), Wide-ResNet-40-2 on reduced CIFAR-10. All CIFAR numbers below are top-1 test error rates (%).
Model (CIFAR-10) | Baseline | Cutout | AutoAugment | Fast AutoAugment (transfer/direct) |
---|---|---|---|---|
Wide-ResNet-40-2 | 5.3 | 4.1 | 3.7 | 3.6 / 3.7 |
Wide-ResNet-28-10 | 3.9 | 3.1 | 2.6 | 2.7 / 2.7 |
Shake-Shake(26 2x32d) | 3.6 | 3.0 | 2.5 | 2.7 / 2.5 |
Shake-Shake(26 2x96d) | 2.9 | 2.6 | 2.0 | 2.0 / 2.0 |
Shake-Shake(26 2x112d) | 2.8 | 2.6 | 1.9 | 2.0 / 1.9 |
PyramidNet+ShakeDrop | 2.7 | 2.3 | 1.5 | 1.8 / 1.7 |
Model (CIFAR-100) | Baseline | Cutout | AutoAugment | Fast AutoAugment (transfer/direct) |
---|---|---|---|---|
Wide-ResNet-40-2 | 26.0 | 25.2 | 20.7 | 20.7 / 20.6 |
Wide-ResNet-28-10 | 18.8 | 18.4 | 17.1 | 17.8 / 17.5 |
Shake-Shake(26 2x96d) | 17.1 | 16.0 | 14.3 | 14.9 / 14.6 |
PyramidNet+ShakeDrop | 14.0 | 12.2 | 10.7 | 11.9 / 11.7 |
Search : 450 GPU-hours (33x faster than AutoAugment), ResNet-50 on reduced ImageNet. ImageNet numbers are top-1 / top-5 validation error rates (%).
Model | Baseline | AutoAugment | Fast AutoAugment |
---|---|---|---|
ResNet-50 | 23.7 / 6.9 | 22.4 / 6.2 | 21.4 / 5.9 |
ResNet-200 | 21.5 / 5.8 | 20.0 / 5.0 | 19.4 / 4.7 |
You can train various networks on CIFAR-10/100 and ImageNet with our searched policies (the policy format is sketched after this list):
- fa_reduced_cifar10 : searched on reduced CIFAR-10 (4k images) with Wide-ResNet-40-2
- fa_reduced_imagenet : searched on reduced ImageNet (50k images, 120 classes) with ResNet-50
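Each named policy is a set of sub-policies; a sub-policy chains a few image operations of the form (operation, probability, magnitude), and at train time one sub-policy is sampled per image. A minimal sketch of that format and of applying it (the sub-policy values and the small PIL op table here are illustrative, not the shipped policy):

```python
import random
from PIL import Image, ImageEnhance, ImageOps

# Illustrative sub-policies in (op_name, probability, magnitude) form;
# the actual fa_reduced_cifar10 policy shipped with the repo differs.
EXAMPLE_POLICY = [
    [("AutoContrast", 0.5, 0), ("Rotate", 0.6, 10)],
    [("Color", 0.4, 1.5), ("Posterize", 0.3, 4)],
]

OPS = {
    "AutoContrast": lambda img, _: ImageOps.autocontrast(img),
    "Rotate": lambda img, deg: img.rotate(deg),
    "Color": lambda img, v: ImageEnhance.Color(img).enhance(v),
    "Posterize": lambda img, bits: ImageOps.posterize(img, int(bits)),
}

def apply_policy(policy, img: Image.Image) -> Image.Image:
    # Sample one sub-policy, then apply each of its ops with its probability.
    for name, prob, magnitude in random.choice(policy):
        if random.random() < prob:
            img = OPS[name](img, magnitude)
    return img
```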
```
$ python train.py -c confs/wresnet40x2_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar10
$ python train.py -c confs/wresnet40x2_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar100
$ python train.py -c confs/wresnet28x10_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar10
$ python train.py -c confs/wresnet28x10_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar100
```
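train.py wires the chosen policy into the data pipeline for you. If you want to reuse a searched policy in your own code, it can be wrapped as an ordinary torchvision transform; a minimal sketch, reusing the hypothetical `apply_policy` and `EXAMPLE_POLICY` from above (`PolicyTransform` is a name introduced here, not part of the repo):

```python
import torchvision.transforms as transforms

class PolicyTransform:
    """Torchvision-style transform wrapping an (op, prob, magnitude) policy."""
    def __init__(self, policy):
        self.policy = policy

    def __call__(self, img):  # PIL image in, PIL image out
        return apply_policy(self.policy, img)

# Typical CIFAR pipeline: standard augmentations, then the searched policy,
# then tensor conversion (normalization omitted for brevity).
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    PolicyTransform(EXAMPLE_POLICY),
    transforms.ToTensor(),
])
```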
Note that we conducted the ImageNet experiments on 8 machines, each with four V100 GPUs (32 GPUs in total):
```
$ python train.py -c confs/resnet50_imagenet_b4096.yaml --aug fa_reduced_imagenet --horovod
```
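For reference, launching that configuration with Horovod's standard launcher across 8 nodes with 4 GPUs each (32 processes) would look roughly like the following; the host names are placeholders:

```
$ horovodrun -np 32 -H node1:4,node2:4,node3:4,node4:4,node5:4,node6:4,node7:4,node8:4 \
    python train.py -c confs/resnet50_imagenet_b4096.yaml --aug fa_reduced_imagenet --horovod
```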
If you use any part of this code in your research, please cite our paper.
```
@article{lim2019fast,
  title={Fast AutoAugment},
  author={Lim, Sungbin and Kim, Ildoo and Kim, Taesup and Kim, Chiheon and Kim, Sungwoong},
  journal={arXiv preprint arXiv:1905.00397},
  year={2019}
}
```
- Ildoo Kim, [email protected]
- Sungbin Lim, [email protected]
- ResNet References
  - (ResNet) Deep Residual Learning for Image Recognition
    - Paper : https://arxiv.org/abs/1512.03385
  - (ResNet) Identity Mappings in Deep Residual Networks
    - Paper : https://arxiv.org/abs/1603.05027
- (PyramidNet) Deep Pyramidal Residual Networks
  - Paper : https://arxiv.org/abs/1610.02915
  - Author's Code : https://github.com/dyhan0920/PyramidNet-PyTorch
- (Wide-ResNet) Wide Residual Networks
  - Paper : https://arxiv.org/abs/1605.07146
- (Shake-Shake) Shake-Shake regularization
  - Paper : https://arxiv.org/abs/1705.07485
- (ShakeDrop) ShakeDrop Regularization for Deep Residual Learning
  - Paper : https://arxiv.org/abs/1802.02375
- (LARS) Large Batch Training of Convolutional Networks
  - Paper : https://arxiv.org/abs/1708.03888
- (ARS-Aug) Learning data augmentation policies using augmented random search
  - Paper : https://arxiv.org/abs/1811.04768
  - Author's Code : https://github.com/gmy2013/ARS-Aug
- (AutoAugment) AutoAugment: Learning Augmentation Policies from Data
  - Paper : https://arxiv.org/abs/1805.09501
- torchvision models : https://pytorch.org/docs/stable/torchvision/models.html
- Preprocessing reference : https://github.com/eladhoffer/convNet.pytorch/blob/master/preprocess.py
- Ray : https://github.com/ray-project/ray
- HyperOpt : https://github.com/hyperopt/hyperopt