[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions

ViT-Adapter


The official implementation of the paper "Vision Transformer Adapter for Dense Predictions".

Paper | Blog in Chinese | Slides | Poster | Video in English | Video in Chinese

Segmentation Colab Notebook | Detection Colab Notebook (thanks @IamShubhamGupto, @dudifrid)

News

  • 2023/07/10: 🚀🚀 Support DINOv2 weights for object detection; see here!
  • 2023/06/26: 🚀 ViT-Adapter is adopted by the champion solution NVOCC in Track 3 (3D Occupancy Prediction) of the CVPR 2023 Autonomous Driving Challenge.
  • 2023/06/07: ViT-Adapter is used by ONE-PEACE, which set a new SOTA of 63.0 mIoU on ADE20K.
  • 2023/04/14: ViT-Adapter is used in EVA and DINOv2!
  • 2023/01/21: Our paper is accepted by ICLR 2023!
  • 2023/01/17: We won first place in the WSDM Cup 2023 Toloka VQA Challenge using ViT-Adapter.
  • 2022/10/20: ViT-Adapter is adopted by Zhang et al., who ranked 1st in the UVO Challenge 2022.
  • 2022/08/22: ViT-Adapter is adopted by BEiT-3, which set a new SOTA of 62.8 mIoU on ADE20K.
  • 2022/06/09: ViT-Adapter-L achieves 60.4 box AP and 52.5 mask AP on COCO test-dev without Objects365.
  • 2022/06/04: Code and models are released.
  • 2022/05/12: ViT-Adapter-L reaches 85.2 mIoU on the Cityscapes test set without coarse data.
  • 2022/05/05: ViT-Adapter-L achieves SOTA on the ADE20K val set with 60.5 mIoU!

Highlights

  • ViT-Adapter supports a variety of dense prediction tasks, including object detection, instance segmentation, semantic segmentation, visual grounding, and panoptic segmentation.
  • This codebase includes many SOTA detectors and segmenters to achieve top performance, such as HTC++, Mask2Former, and DINO (a minimal inference sketch follows below).

results.mp4 (demo video)
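
Assuming the segmentation code follows an MMSegmentation 0.x-style workflow (an assumption for this sketch; the config and checkpoint paths below are placeholders, not actual file names from this repository), single-image inference would look roughly like this:

# Hedged sketch: inference with a ViT-Adapter segmentation model, assuming an
# MMSegmentation 0.x-style API. Config and checkpoint paths are placeholders.
from mmseg.apis import inference_segmentor, init_segmentor

config_file = 'segmentation/configs/ade20k/vit_adapter_example.py'  # placeholder
checkpoint_file = 'vit_adapter_ade20k.pth'                          # placeholder

model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
result = inference_segmentor(model, 'demo.jpg')  # per-pixel label map(s)
model.show_result('demo.jpg', result, out_file='demo_pred.jpg', opacity=0.5)

For a runnable end-to-end example, see the Segmentation Colab Notebook linked above.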

Abstract

This work investigates a simple yet powerful dense prediction task adapter for Vision Transformer (ViT). Unlike recently advanced variants that incorporate vision-specific inductive biases into their architectures, the plain ViT suffers inferior performance on dense predictions due to weak prior assumptions. To address this issue, we propose the ViT-Adapter, which allows plain ViT to achieve comparable performance to vision-specific transformers. Specifically, the backbone in our framework is a plain ViT that can learn powerful representations from large-scale multi-modal data. When transferring to downstream tasks, a pre-training-free adapter is used to introduce the image-related inductive biases into the model, making it suitable for these tasks. We verify ViT-Adapter on multiple dense prediction tasks, including object detection, instance segmentation, and semantic segmentation. Notably, without using extra detection data, our ViT-Adapter-L yields state-of-the-art 60.9 box AP and 53.0 mask AP on COCO test-dev. We hope that the ViT-Adapter could serve as an alternative for vision-specific transformers and facilitate future research. The code and models will be released.
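
To make the idea concrete, here is a minimal, self-contained PyTorch sketch of the adapter concept: a small convolutional module produces multi-scale spatial priors, which are injected into the tokens of a plain ViT via cross-attention, and the conv priors plus the refined tokens are returned for a dense prediction head. All class and parameter names here (SpatialPriorModule, InjectorBlock, inject_every, etc.) are illustrative assumptions for this sketch, not the classes used in this repository, and only the injection direction described in the abstract is shown.

import torch
import torch.nn as nn


class SpatialPriorModule(nn.Module):
    # Illustrative: a small conv stem that produces multi-scale spatial
    # features at strides 8, 16, and 32 to serve as image-related priors.
    def __init__(self, embed_dim=768):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, embed_dim // 4, kernel_size=3, stride=4, padding=1),
            nn.GELU(),
            nn.Conv2d(embed_dim // 4, embed_dim, kernel_size=3, stride=2, padding=1),
        )
        self.down16 = nn.Conv2d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1)
        self.down32 = nn.Conv2d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        c8 = self.stem(x)        # stride 8
        c16 = self.down16(c8)    # stride 16
        c32 = self.down32(c16)   # stride 32
        return c8, c16, c32


class InjectorBlock(nn.Module):
    # Illustrative: cross-attention that injects spatial-prior tokens into the
    # plain ViT's patch tokens (queries = ViT tokens, keys/values = priors).
    def __init__(self, embed_dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, vit_tokens, prior_tokens):
        injected, _ = self.attn(self.norm(vit_tokens), prior_tokens, prior_tokens)
        return vit_tokens + injected


class ViTAdapterSketch(nn.Module):
    # Illustrative wrapper around a plain ViT: priors are injected every few
    # blocks; the refined tokens and conv priors go to a dense prediction head.
    def __init__(self, patch_embed, vit_blocks, embed_dim=768, inject_every=4):
        super().__init__()
        self.patch_embed = patch_embed   # the plain ViT's patch embedding
        self.blocks = vit_blocks         # the plain ViT's transformer blocks
        self.spm = SpatialPriorModule(embed_dim)
        n_inject = (len(vit_blocks) + inject_every - 1) // inject_every
        self.injectors = nn.ModuleList(InjectorBlock(embed_dim) for _ in range(n_inject))
        self.inject_every = inject_every

    def forward(self, img):
        c8, c16, c32 = self.spm(img)
        # Flatten the multi-scale maps into one token sequence of priors.
        prior = torch.cat([c.flatten(2).transpose(1, 2) for c in (c8, c16, c32)], dim=1)
        x = self.patch_embed(img)        # (B, N, C) patch tokens
        for i, blk in enumerate(self.blocks):
            if i % self.inject_every == 0:
                x = self.injectors[i // self.inject_every](x, prior)
            x = blk(x)
        return x, (c8, c16, c32)

The full model is considerably richer than this one-directional, heavily simplified sketch; see the segmentation and detection code in this repository for the actual implementation.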

Method

[Figures: ViT-Adapter method overview]

Catalog

  • Support flash attention
  • Support faster deformable attention
  • Segmentation checkpoints
  • Segmentation code
  • Detection checkpoints
  • Detection code
  • Initialization

Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{chen2022vitadapter,
  title={Vision Transformer Adapter for Dense Predictions},
  author={Chen, Zhe and Duan, Yuchen and Wang, Wenhai and He, Junjun and Lu, Tong and Dai, Jifeng and Qiao, Yu},
  journal={arXiv preprint arXiv:2205.08534},
  year={2022}
}

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.