


Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)

Compact Transformers


Preprint Link: Escaping the Big Data Paradigm with Compact Transformers

By Ali Hassani[1]*, Steven Walton[1]*, Nikhil Shah[1], Abulikemu Abuduweili[1], Jiachen Li[1,2], and Humphrey Shi[1,2,3]

*Ali Hassani and Steven Walton contributed equally

In association with SHI Lab @ University of Oregon[1] and UIUC[2], and Picsart AI Research (PAIR)[3]


Other implementations & resources

[PyTorch blog]: check out our official blog post with PyTorch to learn more about our work and vision transformers in general.

[Keras]: check out Compact Convolutional Transformers on keras.io by Sayak Paul.

[vit-pytorch]: CCT is also available through Phil Wang's vit-pytorch; simply run pip install vit-pytorch

Abstract

With the rise of Transformers as the standard for language processing, and their advancements in computer vision, along with their unprecedented size and amounts of training data, many have come to believe that they are not suitable for small sets of data. This trend leads to great concerns, including but not limited to: limited availability of data in certain scientific domains and the exclusion of those with limited resources from research in the field. In this paper, we dispel the myth that transformers are “data hungry” and therefore can only be applied to large sets of data. We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets, often with better accuracy and fewer parameters. Our model eliminates the requirement for class token and positional embeddings through a novel sequence pooling strategy and the use of convolutions. It is flexible in terms of model size, and can have as little as 0.28M parameters while achieving good results. Our model can reach 98.00% accuracy when training from scratch on CIFAR-10, which is a significant improvement over previous Transformer-based models. It also outperforms many modern CNN-based approaches, such as ResNet, and even some recent NAS-based approaches, such as Proxyless-NAS. Our simple and compact design democratizes transformers by making them accessible to those with limited computing resources and/or dealing with small datasets. Our method also works on larger datasets, such as ImageNet (82.71% accuracy with 29% of the parameters of ViT), and on NLP tasks as well.

ViT-Lite: Lightweight ViT

Different from ViT, we show that an image is not always worth 16x16 words and that the image patch size matters. Transformers are not in fact "data-hungry," as previously claimed, and smaller patch sizes can be used to train efficiently on smaller datasets.

CVT: Compact Vision Transformers

Compact Vision Transformers better utilize information with Sequence Pooling after the encoder, eliminating the need for the class token while achieving better accuracy.
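
A minimal PyTorch sketch of the idea (simplified for illustration, not the repository's exact implementation): a learned linear layer scores every token of the encoder output, and the softmax-weighted sum of tokens replaces the class token.

import torch
import torch.nn as nn

class SequencePooling(nn.Module):
    # Illustrative sequence pooling: an attention-weighted average over tokens
    # stands in for the class token.
    def __init__(self, dim):
        super().__init__()
        self.attention = nn.Linear(dim, 1)  # one scalar score per token

    def forward(self, x):                                   # x: (batch, seq_len, dim)
        weights = torch.softmax(self.attention(x), dim=1)   # (batch, seq_len, 1)
        return (weights.transpose(1, 2) @ x).squeeze(1)     # (batch, dim)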

CCT: Compact Convolutional Transformers

Compact Convolutional Transformers not only use the sequence pooling but also replace the patch embedding with a convolutional embedding, allowing for better inductive bias and making positional embeddings optional. CCT achieves better accuracy than ViT-Lite and CVT and increases the flexibility of the input parameters.
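
The convolutional tokenizer can be pictured roughly as follows (an illustrative sketch with arbitrary channel counts, not the repository's exact layers): convolution and pooling produce a feature map whose spatial positions are flattened into tokens, which injects locality and makes positional embeddings optional.

import torch.nn as nn

# Illustrative tokenizer: conv + pooling, then flatten spatial positions into tokens.
tokenizer = nn.Sequential(
    nn.Conv2d(3, 256, kernel_size=7, stride=2, padding=3, bias=False),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

def tokenize(images):                           # images: (batch, 3, H, W)
    features = tokenizer(images)                # (batch, 256, H', W')
    return features.flatten(2).transpose(1, 2)  # (batch, H'*W', 256)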

Comparison

How to run

Install locally

Our base model is in pure PyTorch and Torchvision. No extra packages are required. Please refer to PyTorch's Getting Started page for detailed instructions.
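
If you want to verify the environment before importing the models, a quick check like the following is enough (any reasonably recent PyTorch and Torchvision should work for the pure-PyTorch models):

import torch, torchvision
print(torch.__version__, torchvision.__version__, torch.cuda.is_available())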

Here are some of the models that can be imported from src (full list available in Variants.md):

| Model | Resolution | PE | Name | Pretrained Weights | Config |
|---|---|---|---|---|---|
| CCT-7/3x1 | 32x32 | Learnable | cct_7_3x1_32 | CIFAR-10/300 Epochs | pretrained/cct_7-3x1_cifar10_300epochs.yml |
| CCT-7/3x1 | 32x32 | Sinusoidal | cct_7_3x1_32_sine | CIFAR-10/5000 Epochs | pretrained/cct_7-3x1_cifar10_5000epochs.yml |
| CCT-7/3x1 | 32x32 | Learnable | cct_7_3x1_32_c100 | CIFAR-100/300 Epochs | pretrained/cct_7-3x1_cifar100_300epochs.yml |
| CCT-7/3x1 | 32x32 | Sinusoidal | cct_7_3x1_32_sine_c100 | CIFAR-100/5000 Epochs | pretrained/cct_7-3x1_cifar100_5000epochs.yml |
| CCT-7/7x2 | 224x224 | Sinusoidal | cct_7_7x2_224_sine | Flowers-102/300 Epochs | pretrained/cct_7-7x2_flowers102.yml |
| CCT-14/7x2 | 224x224 | Learnable | cct_14_7x2_224 | ImageNet-1k/300 Epochs | pretrained/cct_14-7x2_imagenet.yml |
| CCT-14/7x2 | 384x384 | Learnable | cct_14_7x2_384 | ImageNet-1k/Finetuned/30 Epochs | finetuned/cct_14-7x2_imagenet384.yml |
| CCT-14/7x2 | 384x384 | Learnable | cct_14_7x2_384_fl | Flowers102/Finetuned/300 Epochs | finetuned/cct_14-7x2_flowers102.yml |

You can simply import the names provided in the Name column:

from src import cct_14_7x2_384
model = cct_14_7x2_384(pretrained=True, progress=True)

The config files are provided both to specify the training settings and hyperparameters and to make reproduction easier.

Please note that the models missing pretrained weights will be updated soon. They were previously trained using our old training script, and we're working on training them again with the new script for consistency.

You can also create your own models with different image resolutions, positional embeddings, and numbers of classes:

from src import cct_14_7x2_384, cct_7_7x2_224_sine
model = cct_14_7x2_384(img_size=256)
model = cct_7_7x2_224_sine(img_size=256, positional_embedding='sine')

Changing resolution and setting pretrained=True will interpolate the PE vector to support the new size, just like ViT.
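
For example (an illustrative combination of the options shown above, assuming the pretrained weights for cct_14_7x2_384 are available):

from src import cct_14_7x2_384

# Load the pretrained weights at a non-default resolution; the positional
# embedding is interpolated to match the new token grid.
model = cct_14_7x2_384(pretrained=True, progress=True, img_size=256)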

These named models follow the experiments in the paper; you can also create your own versions:

from src import cct_14
model = cct_14(arch='custom', pretrained=False, progress=False, kernel_size=5, n_conv_layers=3)

You can even go further and create your own custom variant by importing the class CCT.
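
As a rough illustration only (the import path and argument names below are assumptions, not a documented signature; check the class definition in src for the actual parameters):

from src import CCT

# Hypothetical constructor arguments, for illustration only; consult the
# class definition in src for the real signature.
model = CCT(
    img_size=224,
    num_classes=10,
    embedding_dim=384,
    n_conv_layers=2,
    kernel_size=7,
    num_layers=14,
    num_heads=6,
    mlp_ratio=3.0,
    positional_embedding='learnable',
)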

All of these apply to CVT and ViT as well.

Training

timm is recommended for image classification training and is required for the training script provided in this repository.

Distributed training

./dist_classification.sh $NUM_GPUS -c $CONFIG_FILE /path/to/dataset

You can use our training configurations provided in configs/:

./dist_classification.sh 8 -c configs/imagenet.yml --model cct_14_7x2_224 /path/to/ImageNet

Non-distributed training

python train.py -c configs/datasets/cifar10.yml --model cct_7_3x1_32 /path/to/cifar10

Models and config files

We've updated this repository and moved the previous training script and the checkpoints associated with it to examples/. The new training script here is just the timm training script. We've provided the checkpoints associated with it in the next section, and the hyperparameters are all provided in configs/pretrained for models trained from scratch, and configs/finetuned for fine-tuned models.

Results

Type can be read in the format L/PxC, where L is the number of transformer layers, P is the patch/convolution size, and C (CCT only) is the number of convolutional layers. For example, CCT-14/7x2 has 14 transformer encoder layers and a tokenizer with two convolutional layers using 7x7 kernels.

CIFAR-10 and CIFAR-100

| Model | Pretraining | Epochs | PE | CIFAR-10 | CIFAR-100 |
|---|---|---|---|---|---|
| CCT-7/3x1 | None | 300 | Learnable | 96.53% | 80.92% |
| CCT-7/3x1 | None | 1500 | Sinusoidal | 97.48% | 82.72% |
| CCT-7/3x1 | None | 5000 | Sinusoidal | 98.00% | 82.87% |

Flowers-102

| Model | Pretraining | PE | Image Size | Accuracy |
|---|---|---|---|---|
| CCT-7/7x2 | None | Sinusoidal | 224x224 | 97.19% |
| CCT-14/7x2 | ImageNet-1k | Learnable | 384x384 | 99.76% |

ImageNet

| Model | Type | Resolution | Epochs | Top-1 Accuracy | # Params | MACs |
|---|---|---|---|---|---|---|
| ViT | 12/16 | 384 | 300 | 77.91% | 86.8M | 17.6G |
| CCT | 14/7x2 | 224 | 310 | 80.67% | 22.36M | 5.11G |
| CCT | 14/7x2 | 384 | 310 + 30 | 82.71% | 22.51M | 15.02G |

NLP

NLP results and instructions have been moved to nlp/.

Citation

@article{hassani2021escaping,
	title        = {Escaping the Big Data Paradigm with Compact Transformers},
	author       = {Ali Hassani and Steven Walton and Nikhil Shah and Abulikemu Abuduweili and Jiachen Li and Humphrey Shi},
	year         = 2021,
	url          = {https://arxiv.org/abs/2104.05704},
	eprint       = {2104.05704},
	archiveprefix = {arXiv},
	primaryclass = {cs.CV}
}

More Repositories

1. OneFormer (Jupyter Notebook, 1,391 stars) - OneFormer: One Transformer to Rule Universal Image Segmentation, arXiv 2022 / CVPR 2023
2. Versatile-Diffusion (Python, 1,300 stars) - Versatile Diffusion: Text, Images and Variations All in One Diffusion Model, arXiv 2022 / ICCV 2023
3. Neighborhood-Attention-Transformer (Python, 1,037 stars) - Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023; Dilated Neighborhood Attention Transformer, arXiv 2022
4. Prompt-Free-Diffusion (Python, 719 stars) - Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024
5. Matting-Anything (Python, 595 stars) - Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance
6. Cross-Scale-Non-Local-Attention (Python, 401 stars) - PyTorch code for "Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining" (CVPR 2020)
7. Pyramid-Attention-Networks (Python, 382 stars) - [IJCV] Pyramid Attention Networks for Image Restoration: new SOTA results on multiple image restoration tasks: denoising, demosaicing, compression artifact reduction, super-resolution
8. NATTEN (CUDA, 333 stars) - Neighborhood Attention Extension. Bringing attention to a neighborhood near you!
9. Smooth-Diffusion (Python, 287 stars) - Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models, arXiv 2023 / CVPR 2024
10. VCoder (Python, 245 stars) - VCoder: Versatile Vision Encoders for Multimodal Large Language Models, arXiv 2023 / CVPR 2024
11. Rethinking-Text-Segmentation (Python, 241 stars) - [CVPR 2021] Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach
12. Agriculture-Vision (199 stars) - [CVPR 2020 & 2021 & 2022 & 2023] Agriculture-Vision Dataset, Prize Challenge and Workshop: a joint effort with many great collaborators to bring the Agriculture and Computer Vision/AI communities together to benefit humanity!
13. Self-Similarity-Grouping (Python, 188 stars) - Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification (ICCV 2019, Oral)
14. FcF-Inpainting (Jupyter Notebook, 169 stars) - [WACV 2023] Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand
15. Decoupled-Classification-Refinement (Python, 167 stars) - Revisiting RCNN: On Awakening the Classification Power of Faster RCNN (ECCV 2018)
16. Convolutional-MLPs (Python, 163 stars) - [Preprint] ConvMLP: Hierarchical Convolutional MLPs for Vision, 2021
17. 3D-Point-Cloud-Learning (130 stars)
18. CuMo (Python, 130 stars) - CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
19. Forget-Me-Not (Python, 107 stars) - Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models, 2023
20. VMFormer (Python, 106 stars) - [Preprint] VMFormer: End-to-End Video Matting with Transformer
21. Semi-Supervised-Transfer-Learning (Jupyter Notebook, 101 stars) - [CVPR 2021] Adaptive Consistency Regularization for Semi-Supervised Transfer Learning
22. SGL-Retinal-Vessel-Segmentation (Jupyter Notebook, 101 stars) - [MICCAI 2021] Study Group Learning: Improving Retinal Vessel Segmentation Trained with Noisy Labels; new SOTA on both DRIVE and CHASE_DB1
23. StyleNAT (Python, 97 stars) - A flexible and efficient image generation framework that sets new SOTA on FFHQ-256 with FID 2.05, 2022
24. Unsupervised-Domain-Adaptation-with-Differential-Treatment (Python, 88 stars) - [CVPR 2020] Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation
25. Text2Video-Zero-sd-webui (Python, 79 stars)
26. GFR-DSOD (Python, 65 stars) - Improving Object Detection from Scratch via Gated Feature Reuse (BMVC 2019)
27. SH-GAN (Python, 62 stars) - [WACV 2023] Image Completion with Heterogeneously Filtered Spectral Hints
28. VIM (Python, 54 stars)
29. UltraSR-Arbitrary-Scale-Super-Resolution (53 stars) - [Preprint] UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution, 2021
30. Any-Precision-DNNs (Python, 44 stars) - Any-Precision Deep Neural Networks (AAAI 2021)
31. Horizontal-Pyramid-Matching (Python, 39 stars) - Horizontal Pyramid Matching for Person Re-identification (AAAI 2019)
32. Pseudo-IoU-for-Anchor-Free-Object-Detection (Python, 30 stars) - Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection
33. Human-Object-Interaction-Detection (25 stars)
34. CompFeat-for-Video-Instance-Segmentation (19 stars) - CompFeat: Comprehensive Feature Aggregation for Video Instance Segmentation (AAAI 2021)
35. OneFormer-Colab (Python, 13 stars) - [Colab Demo Code] OneFormer: One Transformer to Rule Universal Image Segmentation
36. DiSparse-Multitask-Model-Compression (Jupyter Notebook, 13 stars) - [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression
37. Diffusion-Driven-Test-Time-Adaptation-via-Synthetic-Domain-Alignment (Python, 12 stars) - Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment
38. Interpretable-Visual-Reasoning (9 stars) - [ICCV 2021] Interpretable Visual Reasoning via Induced Symbolic Space
39. Mask-Selection-Networks (6 stars) - [CVPR 2021] YouTube-VIS 2021 3rd place, [CVPR 2020] DAVIS 2020 winner; code for mask-selection-based methods
40. Activity-Recognition (5 stars)
41. Boosted-Dynamic-Networks (Python, 4 stars) - Boosted Dynamic Neural Networks, AAAI 2023
42. Aneurysm-Segmentation-with-Multi-Teacher-Pseudo-Labels (1 star)