
Compact Transformers


Preprint Link: Escaping the Big Data Paradigm with Compact Transformers

By Ali Hassani[1]*, Steven Walton[1]*, Nikhil Shah[1], Abulikemu Abuduweili[1], Jiachen Li[1,2], and Humphrey Shi[1,2,3]

*Ali Hassani and Steven Walton contributed equally

In association with SHI Lab @ University of Oregon[1] and UIUC[2], and Picsart AI Research (PAIR)[3]


Other implementations & resources

[PyTorch blog]: check out our official blog post with PyTorch to learn more about our work and vision transformers in general.

[Keras]: check out Compact Convolutional Transformers on keras.io by Sayak Paul.

[vit-pytorch]: CCT is also available through Phil Wang's vit-pytorch, simply use pip install vit-pytorch

Abstract

With the rise of Transformers as the standard for language processing, and their advancements in computer vision, along with their unprecedented size and amounts of training data, many have come to believe that they are not suitable for small sets of data. This trend leads to great concerns, including but not limited to: limited availability of data in certain scientific domains and the exclusion of those with limited resources from research in the field. In this paper, we dispel the myth that transformers are "data hungry" and therefore can only be applied to large sets of data. We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets, often with better accuracy and fewer parameters. Our model eliminates the requirement for the class token and positional embeddings through a novel sequence pooling strategy and the use of convolutions. It is flexible in terms of model size, and can have as little as 0.28M parameters while achieving good results. Our model can reach 98.00% accuracy when training from scratch on CIFAR-10, which is a significant improvement over previous Transformer-based models. It also outperforms many modern CNN-based approaches, such as ResNet, and even some recent NAS-based approaches, such as Proxyless-NAS. Our simple and compact design democratizes transformers by making them accessible to those with limited computing resources and/or dealing with small datasets. Our method also works on larger datasets, such as ImageNet (82.71% accuracy with 29% of the parameters of ViT), and on NLP tasks as well.

ViT-Lite: Lightweight ViT

Different from ViT, we show that an image is not always worth 16x16 words and that the image patch size matters. Transformers are not in fact "data hungry," as the ViT authors suggested, and smaller patching can be used to train efficiently on smaller datasets.

CVT: Compact Vision Transformers

Compact Vision Transformers better utilize information with Sequence Pooling post encoder, eliminating the need for the class token while achieving better accuracy.
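
For intuition, here is a minimal sketch of sequence pooling (illustrative only, not necessarily the repository's exact module): a learnable linear layer scores each token, a softmax over the sequence turns the scores into weights, and the weighted sum of tokens replaces the class token.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqPool(nn.Module):
    # Attention-based pooling: maps an encoder output of shape (B, N, D) to (B, D).
    def __init__(self, embed_dim):
        super().__init__()
        self.attention_pool = nn.Linear(embed_dim, 1)

    def forward(self, x):
        # x: (B, N, D) token embeddings from the transformer encoder
        weights = F.softmax(self.attention_pool(x), dim=1)   # (B, N, 1) per-token weights
        pooled = torch.matmul(weights.transpose(-1, -2), x)  # (B, 1, D) weighted sum of tokens
        return pooled.squeeze(1)                             # (B, D), fed to the classifier head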

CCT: Compact Convolutional Transformers

Compact Convolutional Transformers not only use the sequence pooling but also replace the patch embedding with a convolutional embedding, allowing for better inductive bias and making positional embeddings optional. CCT achieves better accuracy than ViT-Lite and CVT and increases the flexibility of the input parameters.
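
Likewise, a minimal sketch of the convolutional tokenizer idea (illustrative; kernel size, pooling, and the number of conv blocks are configurable in the actual models): a Conv2d/ReLU/MaxPool block produces a feature map whose spatial positions become the token sequence, so no explicit patching is required.

import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    # Illustrative tokenizer: conv -> ReLU -> max-pool, then flatten H x W into a sequence.
    def __init__(self, in_chans=3, embed_dim=256, kernel_size=3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_chans, embed_dim, kernel_size,
                      stride=1, padding=kernel_size // 2, bias=False),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):
        # x: (B, C, H, W) image batch
        x = self.block(x)                      # (B, D, H', W') feature map
        return x.flatten(2).transpose(1, 2)    # (B, H'*W', D) token sequence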

Comparison

How to run

Install locally

Our base model is in pure PyTorch and Torchvision. No extra packages are required. Please refer to PyTorch's Getting Started page for detailed instructions.

Here are some of the models that can be imported from src (full list available in Variants.md):

| Model | Resolution | PE | Name | Pretrained Weights | Config |
|-------|------------|----|------|--------------------|--------|
| CCT-7/3x1 | 32x32 | Learnable | cct_7_3x1_32 | CIFAR-10/300 Epochs | pretrained/cct_7-3x1_cifar10_300epochs.yml |
| CCT-7/3x1 | 32x32 | Sinusoidal | cct_7_3x1_32_sine | CIFAR-10/5000 Epochs | pretrained/cct_7-3x1_cifar10_5000epochs.yml |
| CCT-7/3x1 | 32x32 | Learnable | cct_7_3x1_32_c100 | CIFAR-100/300 Epochs | pretrained/cct_7-3x1_cifar100_300epochs.yml |
| CCT-7/3x1 | 32x32 | Sinusoidal | cct_7_3x1_32_sine_c100 | CIFAR-100/5000 Epochs | pretrained/cct_7-3x1_cifar100_5000epochs.yml |
| CCT-7/7x2 | 224x224 | Sinusoidal | cct_7_7x2_224_sine | Flowers-102/300 Epochs | pretrained/cct_7-7x2_flowers102.yml |
| CCT-14/7x2 | 224x224 | Learnable | cct_14_7x2_224 | ImageNet-1k/300 Epochs | pretrained/cct_14-7x2_imagenet.yml |
| CCT-14/7x2 | 384x384 | | cct_14_7x2_384 | ImageNet-1k/Finetuned/30 Epochs | finetuned/cct_14-7x2_imagenet384.yml |
| CCT-14/7x2 | 384x384 | | cct_14_7x2_384_fl | Flowers102/Finetuned/300 Epochs | finetuned/cct_14-7x2_flowers102.yml |

You can simply import the names provided in the Name column:

from src import cct_14_7x2_384
model = cct_14_7x2_384(pretrained=True, progress=True)

The config files are provided both to specify the training settings and hyperparameters, and to allow easier reproduction.

Please note that the models missing pretrained weights will be updated soon. They were previously trained using our old training script, and we're working on training them again with the new script for consistency.

You could even create your own models with different image resolutions, positional embeddings, and number of classes:

from src import cct_14_7x2_384, cct_7_7x2_224_sine
model = cct_14_7x2_384(img_size=256)
model = cct_7_7x2_224_sine(img_size=256, positional_embedding='sine')

Changing resolution and setting pretrained=True will interpolate the PE vector to support the new size, just like ViT.
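
For example, based on the note above, one would expect the following to download the pretrained weights and interpolate the positional embedding for a 256x256 input (assuming the corresponding checkpoint is available):

from src import cct_14_7x2_384

# Loads the pretrained checkpoint and interpolates its positional
# embedding to match the new 256x256 input resolution.
model = cct_14_7x2_384(pretrained=True, progress=True, img_size=256)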

These models are also based on experiments in the paper. You can create your own versions:

from src import cct_14
model = cct_14(arch='custom', pretrained=False, progress=False, kernel_size=5, n_conv_layers=3)

You can even go further and create your own custom variant by importing the class CCT.

All of these apply to CVT and ViT as well.

Training

timm is recommended for image classification training and required for the training script provided in this repository:

Distributed training

./dist_classification.sh $NUM_GPUS -c $CONFIG_FILE /path/to/dataset

You can use our training configurations provided in configs/:

./dist_classification.sh 8 -c configs/imagenet.yml --model cct_14_7x2_224 /path/to/ImageNet

Non-distributed training

python train.py -c configs/datasets/cifar10.yml --model cct_7_3x1_32 /path/to/cifar10

Models and config files

We've updated this repository and moved the previous training script and the checkpoints associated with it to examples/. The new training script here is just the timm training script. We've provided the checkpoints associated with it in the next section, and the hyperparameters are all provided in configs/pretrained for models trained from scratch, and configs/finetuned for fine-tuned models.

Results

Type can be read in the format L/PxC where L is the number of transformer layers, P is the patch/convolution size, and C (CCT only) is the number of convolutional layers.

CIFAR-10 and CIFAR-100

| Model | Pretraining | Epochs | PE | CIFAR-10 | CIFAR-100 |
|-------|-------------|--------|----|----------|-----------|
| CCT-7/3x1 | None | 300 | Learnable | 96.53% | 80.92% |
| CCT-7/3x1 | None | 1500 | Sinusoidal | 97.48% | 82.72% |
| CCT-7/3x1 | None | 5000 | Sinusoidal | 98.00% | 82.87% |

Flowers-102

| Model | Pre-training | PE | Image Size | Accuracy |
|-------|--------------|----|------------|----------|
| CCT-7/7x2 | None | Sinusoidal | 224x224 | 97.19% |
| CCT-14/7x2 | ImageNet-1k | Learnable | 384x384 | 99.76% |

ImageNet

| Model | Type | Resolution | Epochs | Top-1 Accuracy | # Params | MACs |
|-------|------|------------|--------|----------------|----------|------|
| ViT | 12/16 | 384 | 300 | 77.91% | 86.8M | 17.6G |
| CCT | 14/7x2 | 224 | 310 | 80.67% | 22.36M | 5.11G |
| CCT | 14/7x2 | 384 | 310 + 30 | 82.71% | 22.51M | 15.02G |

NLP

NLP results and instructions have been moved to nlp/.

Citation

@article{hassani2021escaping,
	title        = {Escaping the Big Data Paradigm with Compact Transformers},
	author       = {Ali Hassani and Steven Walton and Nikhil Shah and Abulikemu Abuduweili and Jiachen Li and Humphrey Shi},
	year         = 2021,
	url          = {https://arxiv.org/abs/2104.05704},
	eprint       = {2104.05704},
	archiveprefix = {arXiv},
	primaryclass = {cs.CV}
}

compact-transformers's People

Contributors

alihassanijr, honghuis, stevenwalton, walleclipse


compact-transformers's Issues

Output of the CCT classifier

Hi,

I am a little confused about the output of CCT. If I have a classification task with n possible classes, are the outputs the logits for each class, so that I have to apply a softmax function to obtain the respective probabilities, or are the outputs the probabilities themselves?

Thanks in advance
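
(For reference, a minimal sketch under the usual convention that the classification head is a plain linear layer returning raw logits, in which case probabilities come from a softmax; the constructor call is illustrative:)

import torch
import torch.nn.functional as F
from src import cct_7_3x1_32

model = cct_7_3x1_32()
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 32, 32))  # raw, unnormalized class scores
    probs = F.softmax(logits, dim=-1)          # per-class probabilities summing to 1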

Pretrained cct_14_7x2_224 not available

Hi,
first of all, thanks for your work. I would like to use the pretrained model cct_14_7x2_224; however, it seems unavailable right now. Could you please put it back online?
Thank you

Question about the ViT-Lite model

Hi, sorry to bother you. I have a question about the ViT-Lite model. In ViT-Lite, the number of convolution layers in the tokenizer should be 0, but I found that conv_layers is defined as 1 for ViT-Lite. So when I use ViT-Lite, should the command look like this? python3 main.py --dataset cifar10 --epochs 300 --lr 0.001 --model vit_lite_7 --patch-size 4 --conv-size 3 --conv-layers 0 --warmup 10 --batch-size 128 ./cifar10

Text Masked Attention: TextTokenizer.forward() should return "new_mask"

Hello,

Love your work, it's coming in handy for deep learning on my NLP-like nucleotide data. In my use-case, I regularly pad sequences and in order to account for this padding, I would like to mask these "tokens" during the forward pass. However, when employing the masking tensor in the 'text' directory, I believe TextTokenizer.forward() should return the following (or similar):

return (x, self.forward_mask(mask)) if mask is not None else (x, mask)

By default, we return the argument mask rather than the tensor with the correct shape for the downstream mask required by the masked attention. This is OK for the embedding step as the forward step returns the same tensor (perhaps a bit of optimization could be of use here...), but this behavior causes errors in the tokenizer case.

I can make a PR if desired.

Again, thanks for your efforts. I have a few other questions but I'll save them for other issues.

What are the Python and torch versions?

Traceback (most recent call last):
  File "main.py", line 251, in <module>
    main()
  File "main.py", line 105, in main
    patch_size=args.patch_size)
  File "C:\Users\彭张智computational AI\Desktop\Compact-Transformers-main\src\cct.py", line 326, in cct_7
    *args, **kwargs)
  File "C:\Users\彭张智computational AI\Desktop\Compact-Transformers-main\src\cct.py", line 286, in _cct
    *args, **kwargs)
  File "C:\Users\彭张智computational AI\Desktop\Compact-Transformers-main\src\cct.py", line 267, in __init__
    *args, **kwargs)
  File "C:\Users\彭张智computational AI\Desktop\Compact-Transformers-main\src\cct.py", line 98, in __init__
    nn.init.trunc_normal_(self.positional_emb, std=0.2)
AttributeError: module 'torch.nn.init' has no attribute 'trunc_normal_'
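
(This error suggests an older PyTorch in which nn.init.trunc_normal_ does not exist yet; upgrading PyTorch should resolve it. A hedged workaround sketch, assuming timm is installed, would be to patch in timm's equivalent helper before building the model:)

import torch.nn as nn

if not hasattr(nn.init, 'trunc_normal_'):
    # Older PyTorch builds lack this initializer; borrow timm's implementation.
    from timm.models.layers import trunc_normal_
    nn.init.trunc_normal_ = trunc_normal_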

CIFAR-100 dataset split

Hello. First, thanks for sharing your impressive work.

Since I want to train CCT models for CIFAR-100 for reproduction,
I'd like to ask you about how to split the dataset for train/val/test.

Thanks.

Question about the batch size

Hi, this work is awesome.
I just have one little question. The paper says the total batch size is 128 for the CIFAR datasets and that 4 GPUs were used in parallel.
That doesn't mean the total batch size is 128 * 4 = 512, does it?
DDP is for ImageNet, and non-distributed is for CIFAR, am I correct?

Thanks a ton :)

CCT for NLP

This is very exciting! Thank you! I'm interested in exploring this with NLP. Unfortunately, I'm running into some issues that seem to be related to expected tensor sizes when running one of your models. Do you have any extra NLP-related materials/training scripts that you can share?

Experiment Results

Hi, Ali Hassani
Thank you for sharing your amazing work!

If feasible, I would like to ask a question.
I wonder what training settings you used for CIFAR-100?
When I use the default settings with ResNet-18, my best result is 58%, which differs from the 63.41% reported in the paper.
I have tried it twice and tweaked the warmup to 5 or 10 epochs.
In addition, my CCT-2 result is 65.74%, while the paper reports 66.90%.

Resnet18 pytorch result:
Accu Epoch 190 | 57.7
Accu Epoch 191 | 57.85
Accu Epoch 192 | 57.87
Accu Epoch 193 | 57.92
Accu Epoch 194 | 57.61
Accu Epoch 195 | 57.97
Accu Epoch 196 | 57.78
Accu Epoch 197 | 57.92
Accu Epoch 198 | 58.02
Accu Epoch 199 | 57.94
Accu Epoch 200 | 57.84

Thanks

CIFAR-100 HP

Thanks for this interesting work! I am just wondering if you can provide the command line for training the cifar-100 dataset?

HyperParameters of cifar

I want to reproduce your paper; however, I find that the weight decay and learning rate for CIFAR-10 in the *.yaml files differ from what the paper says. Could you please tell me which parameters I should use?

Training cct_7_7x2_224 on imagenet

Hello,
have you tried to train this model on ImageNet?
I get only 45% accuracy with the same training hyperparameters as cct_14_7x2_224
thanks,
Ilias

Any intuitions on how to reduce CUDA memory usage?

I am working on applying cct_7 to another example with img_size=128, but my main problem is CUDA memory usage. Do you have any intuitions about which parameters I could modify to reduce CUDA memory usage? Thanks!

CIFAR-100 HP with RandAug

Hi, can you share the hyperparameters your team used when training with RandAugment and timm? I have followed the same steps but am still stuck at 76.9% and can't go any further (training for 300 epochs should reach approximately 80%). Thanks for your help.

Pretrained model

Thank you for your nice work, but I can't find the pretrained model. Could you tell me where I can find it?
Best wishes.

Order of `LayerNorm` & `Residual`

First of all, thanks for your amazing work!

And it seems that your TransformerEncoderLayer implementation is a bit different from the 'mainstream' implementations, because you create your residual link after the LayerNorm procedure:

src = src + self.drop_path(self.self_attn(self.pre_norm(src)))
src = self.norm1(src)
src2 = self.linear2(self.dropout1(self.activation(self.linear1(src))))
src = src + self.drop_path(self.dropout2(src2))

However, from the original paper of ViT and many other implementations, the residual link is created before the LayerNorm procedure:

src = src + self.drop_path(self.self_attn(self.pre_norm(src)))
src2 = self.norm1(src)
src2 = self.linear2(self.dropout1(self.activation(self.linear1(src2))))
src = src + self.drop_path(self.dropout2(src2))

I'm just wondering whether this is on purpose or some kind of 'typo'? Thanks in advance!

interpolation of imagenet

Hi, sorry to trouble you again. I have successfully reproduced your results on CIFAR-10, and I plan to reproduce the results on ImageNet.
I followed your configs for ImageNet and my result is about 2% lower than yours.

I would like to know the type of interpolation used in your paper for ImageNet,
as most papers only use bicubic, while your paper uses "random" for ImageNet.

Thanks very much!

About CVT model

Hi, thank you for sharing your code. I have found an unclear point regarding CVT. In your paper, you say that CVT only uses a linear layer as a projection layer and there is no convolution, but this is inconsistent with your code. Indeed, CVT and ViT-Lite use a conv block (Conv2D-ReLU-MaxPool2D) before feeding these maps into the Transformer encoders.

Can you explain it a little more?

Transformer Encoder Code Similarity

Hi @stevenwalton

What is the difference between your "TransformerEncoderLayer" and the original ViT "Transformer" class?

Original ViT

class Transformer(nn.Module):
    def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout):
        super().__init__()
        self.layers = nn.ModuleList([])  # They are using Residual
        for _ in range(depth):
            self.layers.append(nn.ModuleList([
                Residual(PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout))),
                # Here they implemented Residual
                Residual(PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)))
            ]))

    def forward(self, x, mask=None):
        for attn, ff in self.layers:
            x = attn(x, mask=mask)  # Change in this part
            # embed()
            x = ff(x)
        return x

Your Transformer

class TransformerEncoderLayer(nn.Module):
    """
    Inspired by torch.nn.TransformerEncoderLayer and
    rwightman's timm package.
    """

    def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
                 attention_dropout=0.1, drop_path_rate=0.1):
        super(TransformerEncoderLayer, self).__init__()
        self.pre_norm = nn.LayerNorm(d_model)
        self.self_attn = Attention(dim=d_model, num_heads=nhead,
                                   attention_dropout=attention_dropout, projection_dropout=dropout)

        self.linear1 = nn.Linear(d_model, dim_feedforward)
        self.dropout1 = nn.Dropout(dropout)
        self.norm1 = nn.LayerNorm(d_model)
        self.linear2 = nn.Linear(dim_feedforward, d_model)
        self.dropout2 = nn.Dropout(dropout)

        self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity()

        self.activation = F.gelu

    def forward(self, src: torch.Tensor, mask=None, *args, **kwargs) -> torch.Tensor:
        src = src + self.drop_path(self.self_attn(self.pre_norm(src)))
        src = self.norm1(src)
        src2 = self.linear2(self.dropout1(self.activation(self.linear1(src))))
        src = src + self.drop_path(self.dropout2(src2))
        return src

Actually, I made some modifications to the original ViT and I want to add that modified part to your Transformer encoder layer, but your code is different in terms of the TransformerEncoderLayer.

Recommendation

Thank you for sharing this amazing work. I am currently attempting to apply your ideas to a specific problem with bigger images, sized 128x128. Do you have any recommendations on how to improve the performance of your network on bigger images?

Question: Why is sequence pooling more effective than a class token?

Hi, I've read your excellent paper a few times now, but I'm struggling with the intuition of why the sequence pooling approach should be substantially different from, or superior to, a standard class token.

We've known for some time that the class token representations do not give good embedding representations, which is why it's common to pool the embedding representations through mean pooling - such as in Sentence Transformers.

But the use of another set of weights to compute attention weights for creating a pooled representation doesn't sound that different to what a class token should be doing.

A class token should in theory be able to pool representations from the previous layer.

What's the intuition, and what is different in the math or the architecture that would make this a more effective design and lead to much more efficient model sizes?

Been scratching my head about it for a number of days and thought I'd ask?

Thanks again for this great paper and library, and for enabling powerful transformers with a lot less parameters!
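
(For reference, the sequence pooling operation can be written as follows, with notation paraphrased from the paper: $x_L$ is the encoder output of $n$ tokens of dimension $d$, and $g$ is a learnable linear map from $\mathbb{R}^d$ to $\mathbb{R}$.)

$$ z = \operatorname{softmax}\!\big(g(x_L)^{\top}\big)\, x_L, \qquad x_L \in \mathbb{R}^{n \times d},\quad g(x_L) \in \mathbb{R}^{n \times 1},\quad z \in \mathbb{R}^{1 \times d} $$

The pooled vector $z$ fed to the classifier is thus a data-dependent weighted average over all output tokens, rather than the single class-token slot of standard ViT.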

pretrained model on flowers102

Hi, firstly, thanks for your great work.

I am trying to use the pretrained model on Flowers-102, and the accuracy I can achieve is around 8%. Could you provide some details about the pretrained model on Flowers-102? By the way, the other pretrained models work in my code. My settings for Flowers-102:

initialization of net

elif 'cct_7' in arch:
    model = getattr(models, arch)(img_size=224,
                                    num_classes=num_classes,
                                    positional_embedding='sine',
                                    n_conv_layers=2,
                                    kernel_size=7) 

elif 'cct_14' in arch:
    model = getattr(models, arch)(img_size=384,
                                    num_classes=num_classes,
                                    positional_embedding='learnable',
                                    n_conv_layers=2,
                                    kernel_size=7)

dataloader

elif args.data == 'flowers102':
  traindir = os.path.join(args.data_root, 'flower_data/train')
  valdir = os.path.join(args.data_root, 'flower_data/valid')
  testdir = os.path.join(args.data_root, 'flower_data/test')
  normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                   std=[0.229, 0.224, 0.225])

  train_set = datasets.ImageFolder(traindir, transforms.Compose([
      transforms.RandomResizedCrop(224),
      transforms.RandomHorizontalFlip(),
      transforms.ToTensor(),
      normalize
  ]))
  
  val_set = datasets.ImageFolder(valdir, transforms.Compose([
      transforms.Resize(256),
      transforms.CenterCrop(224),
      transforms.ToTensor(),
      normalize
  ]))

Thank you

Fine Tuning

Hi, I was wondering if I could use your finetuned model on my custom data. I already managed to run the Keras tutorial on my data, but I started from scratch. Is it possible to fine-tune it? Should I simply load your pretrained files?

Thank you in advance

pretrained on larger datasets on ImageNet?

I saw that the results on the ImageNet dataset have been published. Is this result pretrained with additional datasets? What are the training parameters? Epochs, batch size?

Question about the Performance of CCT6/3x2

Firstly, thanks for your great work.
I trained your CCT-6 on Google Colab and the result is:

[Epoch 200][Train][100] 	 Loss: 5.9658e-01 	 Top-1  95.77
[Epoch 200][Train][110] 	 Loss: 5.9784e-01 	 Top-1  95.70
[Epoch 200][Train][120] 	 Loss: 5.9848e-01 	 Top-1  95.67
[Epoch 200][Train][130] 	 Loss: 5.9910e-01 	 Top-1  95.61
[Epoch 200][Train][140] 	 Loss: 6.0073e-01 	 Top-1  95.55
[Epoch 200][Train][150] 	 Loss: 6.0226e-01 	 Top-1  95.46
[Epoch 200][Train][160] 	 Loss: 6.0283e-01 	 Top-1  95.46
[Epoch 200][Train][170] 	 Loss: 6.0209e-01 	 Top-1  95.50
[Epoch 200][Train][180] 	 Loss: 6.0230e-01 	 Top-1  95.50
[Epoch 200][Train][190] 	 Loss: 6.0174e-01 	 Top-1  95.52
[Epoch 200][Train][200] 	 Loss: 6.0133e-01 	 Top-1  95.53
[Epoch 200][Train][210] 	 Loss: 6.0177e-01 	 Top-1  95.50
[Epoch 200][Train][220] 	 Loss: 6.0121e-01 	 Top-1  95.51
[Epoch 200][Train][230] 	 Loss: 6.0156e-01 	 Top-1  95.49
[Epoch 200][Train][240] 	 Loss: 6.0092e-01 	 Top-1  95.52
[Epoch 200][Train][250] 	 Loss: 6.0064e-01 	 Top-1  95.54
[Epoch 200][Train][260] 	 Loss: 6.0068e-01 	 Top-1  95.54
[Epoch 200][Train][270] 	 Loss: 6.0083e-01 	 Top-1  95.52
[Epoch 200][Train][280] 	 Loss: 6.0039e-01 	 Top-1  95.54
[Epoch 200][Train][290] 	 Loss: 6.0092e-01 	 Top-1  95.52
[Epoch 200][Train][300] 	 Loss: 6.0109e-01 	 Top-1  95.52
[Epoch 200][Train][310] 	 Loss: 6.0116e-01 	 Top-1  95.53
[Epoch 200][Train][320] 	 Loss: 6.0130e-01 	 Top-1  95.52
[Epoch 200][Train][330] 	 Loss: 6.0153e-01 	 Top-1  95.51
[Epoch 200][Train][340] 	 Loss: 6.0245e-01 	 Top-1  95.46
[Epoch 200][Train][350] 	 Loss: 6.0211e-01 	 Top-1  95.47
[Epoch 200][Train][360] 	 Loss: 6.0238e-01 	 Top-1  95.47
[Epoch 200][Train][370] 	 Loss: 6.0208e-01 	 Top-1  95.49
[Epoch 200][Train][380] 	 Loss: 6.0174e-01 	 Top-1  95.51
[Epoch 200][Train][390] 	 Loss: 6.0167e-01 	 Top-1  95.51
[Epoch 200][Eval][0] 	 Loss: 6.7492e-01 	 Top-1  93.75
[Epoch 200][Eval][10] 	 Loss: 6.7268e-01 	 Top-1  93.39
[Epoch 200][Eval][20] 	 Loss: 6.7867e-01 	 Top-1  93.42
[Epoch 200][Eval][30] 	 Loss: 6.7773e-01 	 Top-1  93.42
[Epoch 200][Eval][40] 	 Loss: 6.8292e-01 	 Top-1  93.43
[Epoch 200][Eval][50] 	 Loss: 6.7983e-01 	 Top-1  93.41
[Epoch 200][Eval][60] 	 Loss: 6.8163e-01 	 Top-1  93.37
[Epoch 200][Eval][70] 	 Loss: 6.7865e-01 	 Top-1  93.43
[Epoch 200] 	 	 Top-1  93.44 	 	 Time: 149.51
Script finished in 149.51 minutes, best top-1: 93.48, final top-1: 93.44

The accuracy on the training set is always higher than on the validation set. The model saturates at about 91% accuracy on the validation set. Is this acceptable, or is the model losing its generalization, i.e., overfitting?

Difference between the paper and the code

Hi, Steven Walton
Thank you for sharing your amazing work!

If feasible, I would like to ask a question.
In the paper I found that "ViT, by default, uses a dropout for the MLP heads only with a probability of 0.1, and no attention dropout. Conversely, we do not use MLP dropout and only use attention dropout (p = 0.1)." However, in utils/transformers.py I found:

src_temp = self.activation(self.linear1(src))
src2 = self.linear2(self.dropout1(src_temp))
src = src + self.drop_path(self.dropout2(src2))

I wonder whether my understanding is wrong. I'd really appreciate it if you could let me know.

Thank you for your nice work | Question on Flowers dataset

Hi @alihassanijr,

Many thanks for your super interesting work, and sharing the elegant code with the community.

I am able to replicate your CIFAR-10 and CIFAR-100 results perfectly. But, there is a large gap when it comes to the Flowers dataset.

After running the following command:

python train.py -c configs/datasets/flowers102.yml --model cct_7_7x2_224_sine ./data/flowers102 --log-wandb

I am able to get only 62% accuracy. Please find the wandb report here. I am attaching the logs too:
output.log

The only change that I made to the code was to use the PyTorch dataloaders:

from torchvision.datasets import Flowers102
dataset_train = Flowers102(root=args.data_dir, split="train", download=True)
dataset_eval = Flowers102(root=args.data_dir, split="test", download=True)

I am sure that this might be some minor configuration issue for Flowers Dataset, as I am able to replicate the results on CIFAR-10 and CIFAR-100.

Thanks again, and it would be very kind of you if you could help me.

Thanks,
Joseph

Question about reproducing CIFAR-10 results

Hi, thank you for this very clean open-source implementation!

I've been testing out some modifications and noticed, even without the modifications, I'm not quite achieving the same accuracies as you report on CIFAR-10. I wondered if you had any suggestions about things that I might have missed.

Specifically, I tried to reproduce your results using one of your configs with the command python train.py -c configs/pretrained/cct_7-3x1_cifar10_300epochs.yml --model cct_7_3x1_32 datasets/CIFAR-10-images/ --log-wandb. I copied the dataset from this github repo. I couldn't find details on whether you use a train/validation split so then trained on all 50000 training images (i.e. with no validation set) and tested on all 10000 test images. I used your validate function for computing the test accuracy (by renaming the test image folder so that it is loaded as a validation set). After the full 300 epochs, I obtained 93.21% test accuracy, rather than the 96.53% that I think you report for this config in the README. Please let me know if there's anything I should do differently to obtain these results - perhaps using a different train/test split, computing the test loss in a different way, turning on EMA averaging, or if there's anything else that I might have missed.

I also tried computing test statistics in the same way after loading your pretrained cct_7_3x1_32 checkpoint (instead of training it) and got a lower test accuracy of 91.67%. So this makes me think that the issue is likely related to testing rather than training.

Thanks!

About image size

Thank you for your dedication.
You mentioned that changing image sizes will affect the sequence length. Can CCT take inputs with various image sizes? For example, if I've trained CCT with image_size = 256x256, can I test this CCT on images of size 512x512?

Experiment Result CCT_7

Hi,

Thank you for the wonderful paper.
I have trained CCT-7/3x1 with the default settings; however, the results differ from the paper:
CIFAR-10: 93.67
CIFAR-100: 73.15
I've tried it many times and it never reaches 94.72% for CIFAR-10 or 76.67% for CIFAR-100.
Could you please help me figure out what makes it differ?

Thanks.

MACs

Hello, thanks very much for your work.
Can you explain how to calculate the MACs of computer vision models, e.g., CCT, ViT?

validation set

Hi!
I started training your model on my data, and after some research I did not manage to understand how the validation set should be structured in terms of directories. I know it's the timm library, but I was wondering if you could help me.

My training data is like this:

-/train/0/class1/.....jpeg
-/train/0/class2/.....
-/train/0/class3/....
-/train/0/class4/....

How and where should I position my validation folder?

Possibility to use CCT Transformer instead of CCT small models

Dear @stevenwalton,

I would like to use the configuration of the best model directly from the CCT class, not by calling the model function. I want to use the CCT class directly, like ViT, as below.

Please check the ViT model: https://github.com/lucidrains/vit-pytorch

v = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

For CCT

"CCT": CCT (
            # GPU_ID=GPU_ID,
            # img_size=112,
            # num_classes=NUM_CLASS,
            positional_embedding='positional_embedding',
            n_conv_layers=2,
            kernel_size=3,
            patch_size=8
        ),

Image scaling and normalization

Hi,

if I want to use the model cct_14_7x2_224 pretrained on the ImageNet dataset, do I have to normalize the images using the mean and standard deviation of the ImageNet dataset, (0.485, 0.456, 0.406) and (0.229, 0.224, 0.225) respectively? Before this normalization, do I have to rescale the pixel values from [0, 255] to [0, 1]?

Thanks
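
(For reference, one common torchvision preprocessing pipeline along these lines: ToTensor already rescales pixels from [0, 255] to [0, 1] floats, after which the ImageNet mean/std normalization is applied. The resize/crop values used to match the pretrained checkpoint are an assumption here:)

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                   # uint8 [0, 255] -> float [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])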

Difference between Hybrid Vit, CCT and CVT

Hi, it's a nice work, but I still feel a little bit confused about some points.
(1) To make ViT still effective for small data, CCT and CVT modify the tokenization part, and CVT applies patching while CCT uses convolution. Is my understanding correct?
(2) In the original ViT paper, they use Hybrid ViT for investigation and comparison. So your method still belongs to Hybrid ViT, and CCT is more similar to Hybrid ViT, right? Did you compare yours with the original Hybrid ViT? It seems that I didn't see it in your paper. By the way, I just want to make sure that the convolution (ResNet) of the original Hybrid ViT is also trained along with ViT, right?

Thanks for help!

Config for training Flowers SOTA

Hi,

I'm trying to figure out how to train your model to achieve the SOTA accuracy you report on Flowers102. It seems like using finetuned/cct_14-7x2_flowers102.yml will download the model with the 99.76% test accuracy you report, but I can't find any config files which actually train this model from scratch (or from e.g. an ImageNet checkpoint if you use that). Do you mind pointing me to any config files for this that I might have missed, or else to a description of the training procedure for your SOTA Flowers102 model so that I can try to reproduce it?

Thanks for your help,
Will

Creating custom model

Hello, I have created a new file custom_vit.py under the src folder and created my model.
However, I am getting the following error:

Traceback (most recent call last):
  File "/home/iliask/PycharmProjects/Compact-Transformers/train.py", line 806, in <module>
    main()
  File "/home/iliask/PycharmProjects/Compact-Transformers/train.py", line 357, in main
    model = create_model(
  File "/home/iliask/miniconda3/envs/SLVTP/lib/python3.9/site-packages/timm/models/factory.py", line 79, in create_model
    raise RuntimeError('Unknown model (%s)' % model_name)
RuntimeError: Unknown model (custom_vit_2_4_32)

What is the config for 224 image size?

How many image reductions should I have?
Is this right?

model = cct_7(img_size=im_size,
              num_classes=classes,
              positional_embedding='learnable',
              n_conv_layers=2,
              kernel_size=7,
              stride=2,
              padding=3,
              pooling_kernel_size=3,
              pooling_stride=2,
              pooling_padding=1)
