
Vision Transformer and MLP-Mixer Architectures

In this repository we release models from the following papers (see the Bibtex section below):

  • "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"
  • "MLP-Mixer: An all-MLP Architecture for Vision"
  • "How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers"
  • "When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations"
  • "Surrogate Gap Minimization Improves Sharpness-Aware Training"
  • "LiT: Zero-Shot Transfer with Locked-image Text Tuning"

The models were pre-trained on the ImageNet and ImageNet-21k datasets. We provide the code for fine-tuning the released models in JAX/Flax.

The models from this codebase were originally trained in https://github.com/google-research/big_vision/ where you can find more advanced code (e.g. multi-host training), as well as some of the original training scripts (e.g. configs/vit_i21k.py for pre-training a ViT, or configs/transfer.py for transfering a model).

Table of contents:

  • Colab
  • Installation
  • Fine-tuning a model
  • Vision Transformer
  • MLP-Mixer
  • LiT models
  • Running on cloud
  • Bibtex
  • Changelog
  • Disclaimers

Colab

The Colabs below run with both GPUs and TPUs (8 cores, data parallelism).

The first Colab demonstrates the JAX code of Vision Transformers and MLP-Mixers. It lets you edit the files from the repository directly in the Colab UI, walks you through the code step by step with annotated cells, and lets you interact with the data.

https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb

The second Colab allows you to explore the >50k Vision Transformer and hybrid checkpoints that were used to generate the data of the third paper "How to train your ViT? ...". The Colab includes code to explore and select checkpoints, and to run inference both with the JAX code from this repo and with the popular timm PyTorch library, which can load these checkpoints directly. Note that a handful of models are also available directly from TF-Hub: sayakpaul/collections/vision_transformer (external contribution by Sayak Paul).
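
For illustration, here is a minimal sketch of loading a ViT checkpoint through timm and running a forward pass; the model name below is just an example of a ViT variant shipped by timm, not a specific checkpoint from this repository:

import timm
import torch

# Create a ViT-B/16 model with pre-trained weights and run it on a dummy batch.
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
with torch.no_grad():
    logits = model(torch.zeros(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])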

The second Colab also lets you fine-tune the checkpoints on any tfds dataset, or on your own dataset with examples stored as individual JPEG files (optionally read directly from Google Drive).

https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax_augreg.ipynb

Note: As of 6/20/21, Google Colab only supports a single GPU (Nvidia Tesla T4), and TPUs (currently TPUv2-8) are attached indirectly to the Colab VM and communicate over a slow network, which leads to poor training speed. You would usually want to set up a dedicated machine if you have a non-trivial amount of data to fine-tune on. For details see the Running on cloud section.

Installation

Make sure you have Python>=3.10 installed on your machine.

Install JAX and python dependencies by running:

# If using GPU:
pip install -r vit_jax/requirements.txt

# If using TPU:
pip install -r vit_jax/requirements-tpu.txt

For newer versions of JAX, follow the instructions provided in the corresponding repository linked here. Note that installation instructions for CPU, GPU and TPU differ slightly.

To install Flaxformer, follow the instructions provided in the corresponding repository linked here.

For more details refer to the section Running on cloud below.

Fine-tuning a model

You can run fine-tuning of the downloaded model on your dataset of interest. All models share the same command line interface.

For example, to fine-tune a ViT-B/16 (pre-trained on imagenet21k) on CIFAR10 (note how we specify b16,cifar10 as arguments to the config, and how we instruct the code to access the models directly from a GCS bucket instead of first downloading them into the local directory):

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k'

To fine-tune a Mixer-B/16 (pre-trained on imagenet21k) on CIFAR10:

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/mixer_base16_cifar10.py \
    --config.pretrained_dir='gs://mixer_models/imagenet21k'

The "How to train your ViT? ..." paper added >50k checkpoints that you can fine-tune with the configs/augreg.py config. When you only specify the model name (the config.name value from configs/model.py), then the best i21k checkpoint by upstream validation accuracy ("recommended" checkpoint, see section 4.5 of the paper) is chosen. To make up your mind which model you want to use, have a look at Figure 3 in the paper. It's also possible to choose a different checkpoint (see Colab vit_jax_augreg.ipynb) and then specify the value from the filename or adapt_filename column, which correspond to the filenames without .npz from the gs://vit_models/augreg directory.

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/augreg.py:R_Ti_16 \
    --config.dataset=oxford_iiit_pet \
    --config.base_lr=0.01

Currently, the code automatically downloads the CIFAR-10 and CIFAR-100 datasets. Other public or custom datasets can be easily integrated using the tensorflow_datasets library. Note that you will also need to update vit_jax/input_pipeline.py to specify some parameters about any added dataset.
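
As a quick sanity check before wiring a new dataset into the input pipeline, you can inspect it with tensorflow_datasets directly. A minimal sketch using oxford_iiit_pet as an example (the exact parameters that input_pipeline.py expects are not shown here; the builder info is simply where values such as the number of classes can be read off):

import tensorflow_datasets as tfds

# Download/prepare the dataset and look at its metadata.
builder = tfds.builder("oxford_iiit_pet")
builder.download_and_prepare()
print(builder.info.features["label"].num_classes)  # 37 classes
print(builder.info.splits["train"].num_examples)

# Iterate over a single example to check image shapes and labels.
ds = builder.as_dataset(split="train")
for example in ds.take(1):
    print(example["image"].shape, example["label"].numpy())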

Note that our code uses all available GPUs/TPUs for fine-tuning.

To see a detailed list of all available flags, run python3 -m vit_jax.main --help.

Notes on memory:

  • Different models require different amounts of memory. Available memory also depends on the accelerator configuration (both type and count). If you encounter an out-of-memory error, you can increase the value of --config.accum_steps=8; alternatively, you can decrease --config.batch=512 (and decrease --config.base_lr accordingly).
  • The host keeps a shuffle buffer in memory. If you encounter a host OOM (as opposed to an accelerator OOM), you can decrease the default --config.shuffle_buffer=50000.

Vision Transformer

by Alexey Dosovitskiy*†, Lucas Beyer*, Alexander Kolesnikov*, Dirk Weissenborn*, Xiaohua Zhai*, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit and Neil Houlsby*†.

(*) equal technical contribution, (†) equal advising.

Figure 1 from paper

Overview of the model: we split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable "classification token" to the sequence.
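
To make the tokenization step concrete, here is a minimal JAX sketch of turning an image into the input sequence described above (plain arrays with randomly initialized weights, purely for illustration; this is not the repository's Flax implementation):

import jax
import jax.numpy as jnp

def patchify(image, patch_size=16):
    """Split an HxWxC image into (H/P)*(W/P) flattened PxPxC patches."""
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * c)
    return patches  # [num_patches, P*P*C]

key = jax.random.PRNGKey(0)
image = jax.random.normal(key, (224, 224, 3))
patches = patchify(image)                        # [196, 768]

# Linear patch embedding, a learnable "classification token", and position
# embeddings (all randomly initialized here, purely for illustration).
hidden_dim = 768
w_embed = jax.random.normal(key, (patches.shape[-1], hidden_dim)) * 0.02
tokens = patches @ w_embed                       # [196, 768]
cls_token = jnp.zeros((1, hidden_dim))
tokens = jnp.concatenate([cls_token, tokens])    # [197, 768]
pos_embed = jax.random.normal(key, (tokens.shape[0], hidden_dim)) * 0.02
tokens = tokens + pos_embed                      # fed to the Transformer encoder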

Available ViT models

We provide a variety of ViT models in different GCS buckets. The models can be downloaded with e.g.:

wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz

The model filenames (without the .npz extension) correspond to the config.model_name values in vit_jax/configs/models.py.
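
A downloaded checkpoint is a plain .npz archive mapping parameter names to numpy arrays, so you can inspect it without any of this repository's code. A small sketch, assuming ViT-B_16.npz has been downloaded to the current directory:

import numpy as np

# The .npz archive maps parameter names to their weight arrays.
ckpt = np.load("ViT-B_16.npz")
for name in list(ckpt.files)[:5]:
    print(name, ckpt[name].shape)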

We recommend using the following checkpoints, trained with AugReg, which have the best pre-training metrics:

Model | Pre-trained checkpoint | Size | Fine-tuned checkpoint | Resolution | Img/sec | Imagenet accuracy
L/16 | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz | 1243 MiB | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 50 | 85.59%
B/16 | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz | 391 MiB | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 138 | 85.49%
S/16 | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz | 115 MiB | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 300 | 83.73%
R50+L/32 | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz | 1337 MiB | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 327 | 85.99%
R26+S/32 | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 170 MiB | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 560 | 83.85%
Ti/16 | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 37 MiB | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 610 | 78.22%
B/32 | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 398 MiB | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 955 | 83.59%
S/32 | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0.npz | 118 MiB | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 2154 | 79.58%
R+Ti/16 | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 40 MiB | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 2426 | 75.40%

The results from the original ViT paper (https://arxiv.org/abs/2010.11929) have been replicated using the models from gs://vit_models/imagenet21k:

model | dataset | dropout=0.0 | dropout=0.1
R50+ViT-B_16 | cifar10 | 98.72%, 3.9h (A100), tb.dev | 98.94%, 10.1h (V100), tb.dev
R50+ViT-B_16 | cifar100 | 90.88%, 4.1h (A100), tb.dev | 92.30%, 10.1h (V100), tb.dev
R50+ViT-B_16 | imagenet2012 | 83.72%, 9.9h (A100), tb.dev | 85.08%, 24.2h (V100), tb.dev
ViT-B_16 | cifar10 | 99.02%, 2.2h (A100), tb.dev | 98.76%, 7.8h (V100), tb.dev
ViT-B_16 | cifar100 | 92.06%, 2.2h (A100), tb.dev | 91.92%, 7.8h (V100), tb.dev
ViT-B_16 | imagenet2012 | 84.53%, 6.5h (A100), tb.dev | 84.12%, 19.3h (V100), tb.dev
ViT-B_32 | cifar10 | 98.88%, 0.8h (A100), tb.dev | 98.75%, 1.8h (V100), tb.dev
ViT-B_32 | cifar100 | 92.31%, 0.8h (A100), tb.dev | 92.05%, 1.8h (V100), tb.dev
ViT-B_32 | imagenet2012 | 81.66%, 3.3h (A100), tb.dev | 81.31%, 4.9h (V100), tb.dev
ViT-L_16 | cifar10 | 99.13%, 6.9h (A100), tb.dev | 99.14%, 24.7h (V100), tb.dev
ViT-L_16 | cifar100 | 92.91%, 7.1h (A100), tb.dev | 93.22%, 24.4h (V100), tb.dev
ViT-L_16 | imagenet2012 | 84.47%, 16.8h (A100), tb.dev | 85.05%, 59.7h (V100), tb.dev
ViT-L_32 | cifar10 | 99.06%, 1.9h (A100), tb.dev | 99.09%, 6.1h (V100), tb.dev
ViT-L_32 | cifar100 | 93.29%, 1.9h (A100), tb.dev | 93.34%, 6.2h (V100), tb.dev
ViT-L_32 | imagenet2012 | 81.89%, 7.5h (A100), tb.dev | 81.13%, 15.0h (V100), tb.dev

We would also like to emphasize that high-quality results can be achieved with shorter training schedules, and we encourage users of our code to play with hyper-parameters to trade off accuracy and computational budget. Some examples for the CIFAR-10/100 datasets are presented in the table below.

upstream | model | dataset | total_steps / warmup_steps | accuracy | wall-clock time | link
imagenet21k | ViT-B_16 | cifar10 | 500 / 50 | 98.59% | 17m | tensorboard.dev
imagenet21k | ViT-B_16 | cifar10 | 1000 / 100 | 98.86% | 39m | tensorboard.dev
imagenet21k | ViT-B_16 | cifar100 | 500 / 50 | 89.17% | 17m | tensorboard.dev
imagenet21k | ViT-B_16 | cifar100 | 1000 / 100 | 91.15% | 39m | tensorboard.dev

MLP-Mixer

by Ilya Tolstikhin*, Neil Houlsby*, Alexander Kolesnikov*, Lucas Beyer*, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy.

(*) equal contribution.

Figure 1 from paper

MLP-Mixer (Mixer for short) consists of per-patch linear embeddings, Mixer layers, and a classifier head. Mixer layers contain one token-mixing MLP and one channel-mixing MLP, each consisting of two fully-connected layers and a GELU nonlinearity. Other components include: skip-connections, dropout, and linear classifier head.
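
The sketch below illustrates the token-mixing/channel-mixing structure of a single Mixer layer in plain JAX (layer norm and dropout are omitted for brevity, and the weights are random; this is not the repository's Flax implementation):

import jax
import jax.numpy as jnp

def mlp(x, w1, w2):
    """Two fully-connected layers with a GELU nonlinearity in between."""
    return jax.nn.gelu(x @ w1) @ w2

def mixer_layer(x, params):
    """One Mixer layer on a [num_patches, channels] token table."""
    # Token mixing: the MLP acts across patches, i.e. along the rows of x.T.
    y = x + mlp(x.T, params["tok_w1"], params["tok_w2"]).T
    # Channel mixing: the MLP acts across the channels of each patch.
    return y + mlp(y, params["ch_w1"], params["ch_w2"])

key = jax.random.PRNGKey(0)
num_patches, channels, tok_hidden, ch_hidden = 196, 768, 384, 3072
ks = jax.random.split(key, 4)
init = lambda k, shape: jax.random.normal(k, shape) * 0.02
params = {
    "tok_w1": init(ks[0], (num_patches, tok_hidden)),
    "tok_w2": init(ks[1], (tok_hidden, num_patches)),
    "ch_w1": init(ks[2], (channels, ch_hidden)),
    "ch_w2": init(ks[3], (ch_hidden, channels)),
}
x = jax.random.normal(key, (num_patches, channels))
print(mixer_layer(x, params).shape)  # (196, 768)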

For installation follow the same steps as above.

Available Mixer models

We provide the Mixer-B/16 and Mixer-L/16 models pre-trained on the ImageNet and ImageNet-21k datasets. Details can be found in Table 3 of the Mixer paper. All the models can be found at:

https://console.cloud.google.com/storage/mixer_models/

Note that these models are also available directly from TF-Hub: sayakpaul/collections/mlp-mixer (external contribution by Sayak Paul).

Expected Mixer results

We ran the fine-tuning code on a Google Cloud machine with four V100 GPUs, using the default adaptation parameters from this repository. Here are the results:

upstream | model | dataset | accuracy | wall_clock_time | link
ImageNet | Mixer-B/16 | cifar10 | 96.72% | 3.0h | tensorboard.dev
ImageNet | Mixer-L/16 | cifar10 | 96.59% | 3.0h | tensorboard.dev
ImageNet-21k | Mixer-B/16 | cifar10 | 96.82% | 9.6h | tensorboard.dev
ImageNet-21k | Mixer-L/16 | cifar10 | 98.34% | 10.0h | tensorboard.dev

LiT models

For details, refer to the Google AI blog post LiT: adding language understanding to image models, or read the CVPR paper "LiT: Zero-Shot Transfer with Locked-image text Tuning" (https://arxiv.org/abs/2111.07991).

We published a Transformer B/16-base model with an ImageNet zero-shot accuracy of 72.1%, and an L/16-large model with an ImageNet zero-shot accuracy of 75.7%. For more details about these models, please refer to the LiT model card.

We provide an in-browser demo with small text encoders for interactive use (the smallest models should even run on a modern cell phone):

https://google-research.github.io/vision_transformer/lit/

And finally a Colab to use the JAX models with both image and text encoders:

https://colab.research.google.com/github/google-research/vision_transformer/blob/main/lit.ipynb

Note that none of the above models support multi-lingual inputs yet, but we're working on publishing such models and will update this repository once they become available.

This repository only contains evaluation code for LiT models. You can find the training code in the big_vision repository:

https://github.com/google-research/big_vision/tree/main/big_vision/configs/proj/image_text

Expected zeroshot results from model_cards/lit.md (note that the zeroshot evaluation is slightly different from the simplified evaluation in the Colab):

Model | B16B_2 | L16L
ImageNet zero-shot | 73.9% | 75.7%
ImageNet v2 zero-shot | 65.1% | 66.6%
CIFAR100 zero-shot | 79.0% | 80.5%
Pets37 zero-shot | 83.3% | 83.3%
Resisc45 zero-shot | 25.3% | 25.6%
MS-COCO Captions image-to-text retrieval | 51.6% | 48.5%
MS-COCO Captions text-to-image retrieval | 31.8% | 31.1%
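
For intuition, zero-shot classification with an image-text model of this kind amounts to embedding one text prompt per class and picking the class whose text embedding is most similar to the image embedding. A model-agnostic sketch (the embeddings below are random stand-ins, not outputs of the LiT encoders):

import jax
import jax.numpy as jnp

def zero_shot_predict(image_embedding, text_embeddings, temperature=100.0):
    """image_embedding: [d]; text_embeddings: [num_classes, d], one per class prompt."""
    img = image_embedding / jnp.linalg.norm(image_embedding)
    txt = text_embeddings / jnp.linalg.norm(text_embeddings, axis=-1, keepdims=True)
    logits = temperature * (txt @ img)  # scaled cosine similarities
    return int(jnp.argmax(logits)), jax.nn.softmax(logits)

# Toy usage with random vectors standing in for encoder outputs.
key = jax.random.PRNGKey(0)
image_embedding = jax.random.normal(key, (512,))
text_embeddings = jax.random.normal(key, (1000, 512))  # e.g. one prompt per ImageNet class
predicted_class, probabilities = zero_shot_predict(image_embedding, text_embeddings)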

Running on cloud

While the above Colabs are quite useful to get started, you would usually want to train on a larger machine with more powerful accelerators.

Create a VM

You can use the following commands to set up a VM with GPUs on Google Cloud:

# Set variables used by all commands below.
# Note that project must have accounting set up.
# For a list of zones with GPUs refer to
# https://cloud.google.com/compute/docs/gpus/gpu-regions-zones
PROJECT=my-awesome-gcp-project  # Project must have billing enabled.
VM_NAME=vit-jax-vm-gpu
ZONE=europe-west4-b

# Below settings have been tested with this repository. You can choose other
# combinations of images & machines; to see the available options, refer to the
# corresponding gcloud commands, e.g.:
# gcloud compute images list --project ml-images
# gcloud compute machine-types list
# etc.
gcloud compute instances create $VM_NAME \
    --project=$PROJECT --zone=$ZONE \
    --image=c1-deeplearning-tf-2-5-cu110-v20210527-debian-10 \
    --image-project=ml-images --machine-type=n1-standard-96 \
    --scopes=cloud-platform,storage-full --boot-disk-size=256GB \
    --boot-disk-type=pd-ssd --metadata=install-nvidia-driver=True \
    --maintenance-policy=TERMINATE \
    --accelerator=type=nvidia-tesla-v100,count=8

# Connect to VM (after some minutes needed to setup & start the machine).
gcloud compute ssh --project $PROJECT --zone $ZONE $VM_NAME

# Stop the VM after use (only storage is billed for a stopped VM).
gcloud compute instances stop --project $PROJECT --zone $ZONE $VM_NAME

# Delete VM after use (this will also remove all data stored on VM).
gcloud compute instances delete --project $PROJECT --zone $ZONE $VM_NAME

Alternatively, you can use the following similar commands to set up a Cloud VM with TPUs attached to it (commands below copied from the TPU tutorial):

PROJECT=my-awesome-gcp-project  # Project must have billing enabled.
VM_NAME=vit-jax-vm-tpu
ZONE=europe-west4-a

# Required to set up service identity initially.
gcloud beta services identity create --service tpu.googleapis.com

# Create a VM with TPUs directly attached to it.
gcloud alpha compute tpus tpu-vm create $VM_NAME \
    --project=$PROJECT --zone=$ZONE \
    --accelerator-type v3-8 \
    --version tpu-vm-base

# Connect to VM (after some minutes needed to setup & start the machine).
gcloud alpha compute tpus tpu-vm ssh --project $PROJECT --zone $ZONE $VM_NAME

# Stop the VM after use (only storage is billed for a stopped VM).
gcloud alpha compute tpus tpu-vm stop --project $PROJECT --zone $ZONE $VM_NAME

# Delete VM after use (this will also remove all data stored on VM).
gcloud alpha compute tpus tpu-vm delete --project $PROJECT --zone $ZONE $VM_NAME

Setup VM

Then fetch the repository and install the dependencies (including jaxlib with TPU support) as usual:

git clone --depth=1 --branch=master https://github.com/google-research/vision_transformer
cd vision_transformer

# optional: install virtualenv
pip3 install virtualenv
python3 -m virtualenv env
. env/bin/activate

If you're connected to a VM with GPUs attached, install JAX and other dependencies with the following command:

pip install -r vit_jax/requirements.txt

If you're connected to a VM with TPUs attached, install JAX and other dependencies with the following command:

pip install -r vit_jax/requirements-tpu.txt

To install Flaxformer, follow the instructions provided in the corresponding repository linked here.

For both GPUs and TPUs, check that JAX can connect to the attached accelerators with the command:

python -c 'import jax; print(jax.devices())'

Finally, execute one of the commands mentioned in the section Fine-tuning a model.

Bibtex

@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and  Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}

@article{tolstikhin2021mixer,
  title={MLP-Mixer: An all-MLP Architecture for Vision},
  author={Tolstikhin, Ilya and Houlsby, Neil and Kolesnikov, Alexander and Beyer, Lucas and Zhai, Xiaohua and Unterthiner, Thomas and Yung, Jessica and Steiner, Andreas and Keysers, Daniel and Uszkoreit, Jakob and Lucic, Mario and Dosovitskiy, Alexey},
  journal={arXiv preprint arXiv:2105.01601},
  year={2021}
}

@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}

@article{chen2021outperform,
  title={When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations},
  author={Chen, Xiangning and Hsieh, Cho-Jui and Gong, Boqing},
  journal={arXiv preprint arXiv:2106.01548},
  year={2021},
}

@article{zhuang2022gsam,
  title={Surrogate Gap Minimization Improves Sharpness-Aware Training},
  author={Zhuang, Juntang and Gong, Boqing and Yuan, Liangzhe and Cui, Yin and Adam, Hartwig and Dvornek, Nicha and Tatikonda, Sekhar and Duncan, James and Liu, Ting},
  journal={ICLR},
  year={2022},
}

@article{zhai2022lit,
  title={LiT: Zero-Shot Transfer with Locked-image Text Tuning},
  author={Zhai, Xiaohua and Wang, Xiao and Mustafa, Basil and Steiner, Andreas and Keysers, Daniel and Kolesnikov, Alexander and Beyer, Lucas},
  journal={CVPR},
  year={2022}
}

Changelog

In reverse chronological order:

  • 2022-08-18: Added LiT-B16B_2 model that was trained for 60k steps (LiT_B16B: 30k) without linear head on the image side (LiT_B16B: 768) and has better performance.

  • 2022-06-09: Added the ViT and Mixer models trained from scratch using GSAM on ImageNet without strong data augmentations. The resultant ViTs outperform those of similar sizes trained using AdamW optimizer or the original SAM algorithm, or with strong data augmentations.

  • 2022-04-14: Added models and Colab for LiT models.

  • 2021-07-29: Added ViT-B/8 AugReg models (3 upstream checkpoints and adaptations with resolution=224).

  • 2021-07-02: Added the "When Vision Transformers Outperform ResNets..." paper

  • 2021-07-02: Added SAM (Sharpness-Aware Minimization) optimized ViT and MLP-Mixer checkpoints.

  • 2021-06-20: Added the "How to train your ViT? ..." paper, and a new Colab to explore the >50k pre-trained and fine-tuned checkpoints mentioned in the paper.

  • 2021-06-18: This repository was rewritten to use Flax Linen API and ml_collections.ConfigDict for configuration.

  • 2021-05-19: With publication of the "How to train your ViT? ..." paper, we added more than 50k ViT and hybrid models pre-trained on ImageNet and ImageNet-21k with various degrees of data augmentation and model regularization, and fine-tuned on ImageNet, Pets37, Kitti-distance, CIFAR-100, and Resisc45. Check out vit_jax_augreg.ipynb to navigate this treasure trove of models! For example, you can use that Colab to fetch the filenames of recommended pre-trained and fine-tuned checkpoints from the i21k_300 column of Table 3 in the paper.

  • 2020-12-01: Added the R50+ViT-B/16 hybrid model (ViT-B/16 on top of a Resnet-50 backbone). When pretrained on imagenet21k, this model achieves almost the performance of the L/16 model with less than half the computational finetuning cost. Note that "R50" is somewhat modified for the B/16 variant: The original ResNet-50 has [3,4,6,3] blocks, each reducing the resolution of the image by a factor of two. In combination with the ResNet stem this would result in a reduction of 32x so even with a patch size of (1,1) the ViT-B/16 variant cannot be realized anymore. For this reason we instead use [3,4,9] blocks for the R50+B/16 variant.

  • 2020-11-09: Added the ViT-L/16 model.

  • 2020-10-29: Added ViT-B/16 and ViT-L/16 models pretrained on ImageNet-21k and then fine-tuned on ImageNet at 224x224 resolution (instead of default 384x384). These models have the suffix "-224" in their name. They are expected to achieve 81.2% and 82.7% top-1 accuracies respectively.

Disclaimers

Open source release prepared by Andreas Steiner.

Note: This repository was forked and modified from google-research/big_transfer.

This is not an official Google product.
