
Compact Transformers

Preprint Link: Escaping the Big Data Paradigm with Compact Transformers

By Ali Hassani[1]*, Steven Walton[1]*, Nikhil Shah[1], Abulikemu Abuduweili[1], Jiachen Li[1,2], and Humphrey Shi[1,2,3]

*Ali Hassani and Steven Walton contributed equally

In association with SHI Lab @ University of Oregon[1] and UIUC[2], and Picsart AI Research (PAIR)[3]


Other implementations & resources

[PyTorch blog]: check out our official blog post with PyTorch to learn more about our work and vision transformers in general.

[Keras]: check out Compact Convolutional Transformers on keras.io by Sayak Paul.

[vit-pytorch]: CCT is also available through Phil Wang's vit-pytorch; simply run pip install vit-pytorch.

Abstract

With the rise of Transformers as the standard for language processing, and their advancements in computer vision, along with their unprecedented size and amounts of training data, many have come to believe that they are not suitable for small sets of data. This trend leads to great concerns, including but not limited to: limited availability of data in certain scientific domains and the exclusion of those with limited resources from research in the field. In this paper, we dispel the myth that transformers are “data hungry” and therefore can only be applied to large sets of data. We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets, often with better accuracy and fewer parameters. Our model eliminates the requirement for a class token and positional embeddings through a novel sequence pooling strategy and the use of convolutions. It is flexible in terms of model size, and can have as few as 0.28M parameters while achieving good results. Our model can reach 98.00% accuracy when training from scratch on CIFAR-10, which is a significant improvement over previous transformer-based models. It also outperforms many modern CNN-based approaches, such as ResNet, and even some recent NAS-based approaches, such as ProxylessNAS. Our simple and compact design democratizes transformers by making them accessible to those with limited computing resources and/or dealing with small datasets. Our method also works on larger datasets, such as ImageNet (82.71% accuracy with 29% of ViT's parameters), and on NLP tasks as well.

ViT-Lite: Lightweight ViT

Different from ViT, we show that an image is not always worth 16x16 words and that the image patch size matters. Transformers are not in fact "data-hungry," as the ViT authors suggested, and smaller patch sizes allow efficient training on smaller datasets. For example, a 32x32 CIFAR-10 image split into 4x4 patches yields 64 tokens, whereas ViT-style 16x16 patches would yield only 4.

CVT: Compact Vision Transformers

Compact Vision Transformers make better use of the encoder's output by applying Sequence Pooling after the encoder, eliminating the need for the class token while achieving better accuracy.
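
To make the idea concrete, here is a minimal PyTorch sketch of sequence pooling, an attention-weighted average over the output tokens; the module and variable names are illustrative, not the repository's actual code:

import torch
import torch.nn as nn

class SeqPool(nn.Module):
    """Pools the encoder's token sequence into a single vector,
    replacing the ViT class token."""
    def __init__(self, dim):
        super().__init__()
        self.attention = nn.Linear(dim, 1)  # scores each token

    def forward(self, x):                                  # x: (batch, tokens, dim)
        weights = torch.softmax(self.attention(x), dim=1)  # (batch, tokens, 1)
        return (weights * x).sum(dim=1)                    # (batch, dim)

The pooled vector feeds the classifier head directly, so no extra learnable token has to travel through the encoder.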

CCT: Compact Convolutional Transformers

Compact Convolutional Transformers not only use sequence pooling but also replace the patch embedding with a convolutional embedding, introducing a better inductive bias and making positional embeddings optional. CCT achieves better accuracy than ViT-Lite and CVT and increases the flexibility of the input parameters.
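
A rough PyTorch sketch of such a convolutional tokenizer follows; the channel count and layer structure are illustrative assumptions (in the CCT-L/PxC naming, P is the kernel size and C the number of conv layers):

import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    """Turns an image into a token sequence with conv + pool blocks
    instead of non-overlapping patch slicing."""
    def __init__(self, in_channels=3, dim=256, n_layers=2, kernel=3):
        super().__init__()
        blocks = []
        channels = in_channels
        for _ in range(n_layers):
            blocks += [
                nn.Conv2d(channels, dim, kernel, stride=1, padding=kernel // 2),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            ]
            channels = dim
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x):                    # x: (batch, 3, H, W)
        x = self.blocks(x)                   # (batch, dim, H', W')
        return x.flatten(2).transpose(1, 2)  # (batch, tokens, dim)

Because the convolutions overlap, neighboring tokens share information, which is the inductive bias that lets positional embeddings become optional.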

Comparison

[Comparison figure omitted]

How to run

Install locally

Please make sure you're using the following PyTorch and torchvision versions:

torch==1.8.1
torchvision==0.9.1

Refer to PyTorch's Getting Started page for detailed instructions.
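
If you are installing with pip, the pinned versions above correspond to:

pip install torch==1.8.1 torchvision==0.9.1

For CUDA-specific wheels, follow the instructions on the Getting Started page.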

Using Docker

There's also a Dockerfile, which builds off of the PyTorch image (requires CUDA).
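
A hedged example of building and running with Docker follows; the image tag, mount path, and working directory are illustrative assumptions, not from the repository:

docker build -t compact-transformers .
docker run --gpus all -it \
       -v /path/to/cifar10:/data \
       compact-transformers \
       python main.py --dataset cifar10 --model cct_2 --conv-size 3 --conv-layers 2 /data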

Training

We recommend starting with our fastest variant, CCT-2/3x2, which can be run with the following command. This is also the model we recommend if you are running on a CPU.

python main.py \
       --dataset cifar10 \
       --model cct_2 \
       --conv-size 3 \
       --conv-layers 2 \
       path/to/cifar10

If you would like to run our best-performing models (CCT-6/3x1 or CCT-7/3x1) on CIFAR-10 on your machine, please use the following command.

python main.py \
       --dataset cifar10 \
       --model cct_6 \
       --conv-size 3 \
       --conv-layers 1 \
       --warmup 10 \
       --batch-size 64 \
       --checkpoint-path /path/to/checkpoint.pth \
       path/to/cifar10

Evaluation

You can use evaluate.py to evaluate the performance of a checkpoint.

python evaluate.py \
       --dataset cifar10 \
       --model cct_6 \
       --conv-size 3 \
       --conv-layers 1 \
       --checkpoint-path /path/to/checkpoint.pth \
       path/to/cifar10

Results

Type is read in the format L/PxC, where L is the number of transformer encoder layers, P is the patch or convolution size, and C (CCT only) is the number of convolutional tokenizer layers. For example, CCT-7/3x1 has seven transformer layers and a single 3x3 convolution in its tokenizer.

CIFAR-10 and CIFAR-100

| Model    | Type  | Epochs | CIFAR-10 | CIFAR-100 | # Params | MACs   |
|----------|-------|--------|----------|-----------|----------|--------|
| ViT-Lite | 7/4   | 200    | 91.38%   | 69.75%    | 3.717M   | 0.239G |
| ViT-Lite | 6/4   | 200    | 90.94%   | 69.20%    | 3.191M   | 0.205G |
| CVT      | 7/4   | 200    | 92.43%   | 73.01%    | 3.717M   | 0.236G |
| CVT      | 6/4   | 200    | 92.58%   | 72.25%    | 3.190M   | 0.202G |
| CCT      | 2/3x2 | 200    | 89.17%   | 66.90%    | 0.284M   | 0.033G |
| CCT      | 4/3x2 | 200    | 91.45%   | 70.46%    | 0.482M   | 0.046G |
| CCT      | 6/3x2 | 200    | 93.56%   | 74.47%    | 3.327M   | 0.241G |
| CCT      | 7/3x2 | 200    | 93.83%   | 74.92%    | 3.853M   | 0.275G |
| CCT      | 7/3x1 | 200    | 94.78%   | 77.05%    | 3.760M   | 0.947G |
| CCT      | 6/3x1 | 200    | 94.81%   | 76.71%    | 3.168M   | 0.813G |
| CCT      | 6/3x1 | 500    | 95.29%   | 77.31%    | 3.168M   | 0.813G |

RandAugment + Mixup + CutMix

We trained the following using timm.

| Model     | Epochs | Positional Embedding | CIFAR-10 | CIFAR-100 |
|-----------|--------|----------------------|----------|-----------|
| CCT-7/3x1 | 300    | Learnable            | 96.53%   | 80.92%    |
| CCT-7/3x1 | 1500   | Sinusoidal           | 97.48%   | 82.72%    |
| CCT-7/3x1 | 5000   | Sinusoidal           | 98.00%   | -         |

ImageNet

| Model | Type    | Resolution | Epochs | Top-1 Accuracy | # Params | MACs   |
|-------|---------|------------|--------|----------------|----------|--------|
| ViT   | 12/16   | 384        | 300    | 77.91%         | 86.8M    | 17.6G  |
| CCT   | 14t/7x2 | 224        | 310    | 80.67%         | 22.36M   | 5.11G  |
| CCT   | 14t/7x2 | 384        | 310    | 82.71%         | 22.51M   | 15.02G |

Please note that we used Ross Wightman's ImageNet training script to train these.

NLP Results

| Model | Kernel size | AGNews | TREC   | # Params |
|-------|-------------|--------|--------|----------|
| CCT-2 | 1           | 93.45% | 91.00% | 0.238M   |
| CCT-2 | 2           | 93.51% | 91.80% | 0.276M   |
| CCT-2 | 4           | 93.80% | 91.00% | 0.353M   |
| CCT-4 | 1           | 93.55% | 91.80% | 0.436M   |
| CCT-4 | 2           | 93.24% | 93.60% | 0.475M   |
| CCT-4 | 4           | 93.09% | 93.00% | 0.551M   |
| CCT-6 | 1           | 93.78% | 91.60% | 3.237M   |
| CCT-6 | 2           | 93.33% | 92.20% | 3.313M   |
| CCT-6 | 4           | 92.95% | 92.80% | 3.467M   |

More models are being uploaded.

Citation

@article{hassani2021escaping,
	title        = {Escaping the Big Data Paradigm with Compact Transformers},
	author       = {Ali Hassani and Steven Walton and Nikhil Shah and Abulikemu Abuduweili and Jiachen Li and Humphrey Shi},
	year         = 2021,
	url          = {https://arxiv.org/abs/2104.05704},
	eprint       = {2104.05704},
	archiveprefix = {arXiv},
	primaryclass = {cs.CV}
}
