[NeurIPS 2021] You Only Look at One Sequence

You Only 👀 One Sequence

TL;DR: We study the transferability of the vanilla ViT pre-trained on mid-sized ImageNet-1k to the more challenging COCO object detection benchmark.

👨‍💻 This project is under active development 👩‍💻:

  • May 4, 2022: 👀 YOLOS is now available in 🤗 HuggingFace Transformers (see the usage sketch after this list)!

  • Apr 8, 2022: If you like YOLOS, you might also like MIMDet (paper / code & models)! MIMDet can efficiently and effectively adapt a masked image modeling (MIM) pre-trained vanilla Vision Transformer (ViT) for high-performance object detection (51.5 box AP and 46.0 mask AP on COCO using ViT-Base & Mask R-CNN).

  • Oct 28, 2021: YOLOS receives an update for the NeurIPS 2021 camera-ready version. We add MoCo-v3 self-supervised pre-training results, study the impact of detaching [Det] tokens, and add a new Discussion section.

  • Sep 29, 2021: YOLOS is accepted to NeurIPS 2021!

  • Jun 22, 2021: We update our manuscript on arXiv, including a discussion of position embeddings and more visualizations; check it out!

  • Jun 9, 2021: We add a notebook to visualize the self-attention maps of [Det] tokens on different heads of the last layer; check it out!
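
As a quick reference for the Hugging Face integration mentioned above, below is a minimal usage sketch. It is illustrative and not part of this repository; it assumes the transformers, torch, Pillow, and requests packages are installed and uses the hustvl/yolos-small checkpoint from the Hub:

# Minimal sketch: running a YOLOS checkpoint through Hugging Face Transformers.
# Assumes `pip install transformers torch pillow requests` and network access to the Hub.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-small")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to COCO-style detections above a score threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score.item():.2f} at {box.tolist()}")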

You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection

by Yuxin Fang1*, Bencheng Liao1*, Xinggang Wang1 📧, Jiemin Fang2,1, Jiyang Qi1, Rui Wu3, Jianwei Niu3, Wenyu Liu1.

1 School of EIC, HUST, 2 Institute of AI, HUST, 3 Horizon Robotics.

(*) equal contribution, (📧) corresponding author.

arXiv technical report (arXiv 2106.00666)


You Only Look at One Sequence (YOLOS)

The Illustration of YOLOS

[Figure: schematic illustration of the YOLOS architecture]

Highlights

Directly inherited from ViT (DeiT), YOLOS is not designed to be yet another high-performance object detector, but to unveil the versatility and transferability of Transformer from image recognition to object detection. Concretely, our main contributions are summarized as follows:

  • We use the mid-sized ImageNet-1k as the sole pre-training dataset, and show that a vanilla ViT (DeiT) can be successfully transferred to perform the challenging object detection task and produce competitive COCO results with the fewest possible modifications, i.e., by only looking at one sequence (YOLOS).

  • We demonstrate that 2D object detection can be accomplished in a pure sequence-to-sequence manner by taking a sequence of fixed-sized non-overlapping image patches as input (see the sketch after this list). Among existing object detectors, YOLOS utilizes minimal 2D inductive biases. Moreover, it is feasible for YOLOS to perform object detection in any dimensional space without knowing the exact spatial structure or geometry.

  • For ViT (DeiT), we find the object detection results are quite sensitive to the pre-training scheme, and the detection performance is far from saturating. Therefore, the proposed YOLOS can be used as a challenging benchmark task to evaluate different pre-training strategies for ViT (DeiT).

  • We also discuss the impacts as well as the limitations of prevalent pre-training schemes and model scaling strategies for Transformer in vision through transferring to object detection.
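
To make the pure sequence-to-sequence formulation described above concrete, here is a toy sketch. It is illustrative only: the variable names such as num_det_tokens are hypothetical and this is not the repository's actual preprocessing code.

# Illustrative sketch: an image becomes a sequence of non-overlapping patch tokens,
# with learnable [Det] tokens appended; each [Det] token later predicts one box + class.
import torch
import torch.nn as nn

patch_size, embed_dim, num_det_tokens = 16, 384, 100
image = torch.randn(1, 3, 512, 864)                                    # (B, C, H, W)

# Split into non-overlapping P x P patches and flatten each patch into a vector.
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).flatten(1, 2).flatten(2)   # (B, N, C*P*P)

patch_embed = nn.Linear(3 * patch_size * patch_size, embed_dim)
patch_tokens = patch_embed(patches)                                     # (B, N, D)

# [Det] tokens play the role that the single [CLS] token plays in classification.
det_tokens = nn.Parameter(torch.zeros(1, num_det_tokens, embed_dim))
sequence = torch.cat(
    [patch_tokens, det_tokens.expand(patch_tokens.size(0), -1, -1)], dim=1
)
print(sequence.shape)  # torch.Size([1, N + 100, 384]); no explicit 2D structure is kept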

Results

Model         | Pre-train Epochs | ViT (DeiT) Weight / Log         | Fine-tune Epochs | Eval Size | YOLOS Checkpoint / Log          | AP @ COCO val
------------- | ---------------- | ------------------------------- | ---------------- | --------- | ------------------------------- | -------------
YOLOS-Ti      | 300              | FB                              | 300              | 512       | Baidu Drive, Google Drive / Log | 28.7
YOLOS-S       | 200              | Baidu Drive, Google Drive / Log | 150              | 800       | Baidu Drive, Google Drive / Log | 36.1
YOLOS-S       | 300              | FB                              | 150              | 800       | Baidu Drive, Google Drive / Log | 36.1
YOLOS-S (dWr) | 300              | Baidu Drive, Google Drive / Log | 150              | 800       | Baidu Drive, Google Drive / Log | 37.6
YOLOS-B       | 1000             | FB                              | 150              | 800       | Baidu Drive, Google Drive / Log | 42.0

Notes:

  • The access code for Baidu Drive is yolo.
  • FB stands for model weights provided by DeiT (paper, code). Thanks for their wonderful work.
  • We will update other models in the future, please stay tuned :)

Requirements

This codebase has been developed with Python 3.6, PyTorch 1.5+, and torchvision 0.6+:

conda install -c pytorch pytorch torchvision

Install pycocotools (for evaluation on COCO) and scipy (for training):

conda install cython scipy
pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
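
Since the Python/PyTorch versions above matter, a quick environment sanity check can help before training. The following is an illustrative snippet, not part of the codebase:

# Illustrative environment check for the versions stated above.
import sys
import torch
import torchvision

print(f"Python        : {sys.version.split()[0]}")
print(f"PyTorch       : {torch.__version__}")
print(f"torchvision   : {torchvision.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")

major_minor = tuple(int(v) for v in torch.__version__.split('+')[0].split('.')[:2])
assert major_minor >= (1, 5), "This codebase expects PyTorch 1.5+"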

Data preparation

Download and extract COCO 2017 train and val images with annotations from http://cocodataset.org. We expect the directory structure to be the following:

path/to/coco/
  annotations/  # annotation json files
  train2017/    # train images
  val2017/      # val images
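
Before launching training, you can verify this layout with a small helper. The snippet below is illustrative only (not part of the codebase); the annotation file names are the standard COCO 2017 ones:

# Illustrative check that the COCO 2017 layout matches the structure shown above.
from pathlib import Path

def check_coco_layout(root: str) -> None:
    root_path = Path(root)
    expected = [
        root_path / "annotations" / "instances_train2017.json",
        root_path / "annotations" / "instances_val2017.json",
        root_path / "train2017",
        root_path / "val2017",
    ]
    for p in expected:
        status = "ok" if p.exists() else "MISSING"
        print(f"{status:>7}  {p}")

check_coco_layout("/path/to/coco")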

Training

Before fine-tuning on COCO, you need to download the ImageNet pre-trained model to the /path/to/YOLOS/ directory.

To train the YOLOS-Ti model in the paper, run this command:

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --use_env main.py \
    --coco_path /path/to/coco \
    --batch_size 2 \
    --lr 5e-5 \
    --epochs 300 \
    --backbone_name tiny \
    --pre_trained /path/to/deit-tiny.pth \
    --eval_size 512 \
    --init_pe_size 800 1333 \
    --output_dir /output/path/box_model

To train the YOLOS-S model with the 200-epoch pre-trained DeiT-S in the paper, run this command:

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --use_env main.py \
    --coco_path /path/to/coco \
    --batch_size 1 \
    --lr 2.5e-5 \
    --epochs 150 \
    --backbone_name small \
    --pre_trained /path/to/deit-small-200epoch.pth \
    --eval_size 800 \
    --init_pe_size 512 864 \
    --mid_pe_size 512 864 \
    --output_dir /output/path/box_model

To train the YOLOS-S model with the 300-epoch pre-trained DeiT-S in the paper, run this command:

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --use_env main.py \
    --coco_path /path/to/coco \
    --batch_size 1 \
    --lr 2.5e-5 \
    --epochs 150 \
    --backbone_name small \
    --pre_trained /path/to/deit-small-300epoch.pth \
    --eval_size 800 \
    --init_pe_size 512 864 \
    --mid_pe_size 512 864 \
    --output_dir /output/path/box_model

To train the YOLOS-S (dWr) model in the paper, run this command:

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --use_env main.py \
    --coco_path /path/to/coco \
    --batch_size 1 \
    --lr 2.5e-5 \
    --epochs 150 \
    --backbone_name small_dWr \
    --pre_trained /path/to/deit-small-dWr-scale.pth \
    --eval_size 800 \
    --init_pe_size 512 864 \
    --mid_pe_size 512 864 \
    --output_dir /output/path/box_model

To train the YOLOS-B model in the paper, run this command:

python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --use_env main.py \
    --coco_path /path/to/coco \
    --batch_size 1 \
    --lr 2.5e-5 \
    --epochs 150 \
    --backbone_name base \
    --pre_trained /path/to/deit-base.pth \
    --eval_size 800 \
    --init_pe_size 800 1344 \
    --mid_pe_size 800 1344 \
    --output_dir /output/path/box_model

Evaluation

To evaluate the YOLOS-Ti model on COCO, run:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path /path/to/coco --batch_size 2 --backbone_name tiny --eval --eval_size 512 --init_pe_size 800 1333 --resume /path/to/YOLOS-Ti

To evaluate the YOLOS-S model on COCO, run:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path /path/to/coco --batch_size 1 --backbone_name small --eval --eval_size 800 --init_pe_size 512 864 --mid_pe_size 512 864 --resume /path/to/YOLOS-S

To evaluate the YOLOS-S (dWr) model on COCO, run:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path /path/to/coco --batch_size 1 --backbone_name small_dWr --eval --eval_size 800 --init_pe_size 512 864 --mid_pe_size 512 864 --resume '/path/to/YOLOS-S(dWr)'

To evaluate the YOLOS-B model on COCO, run:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path /path/to/coco --batch_size 1 --backbone_name base --eval --eval_size 800 --init_pe_size 800 1344 --mid_pe_size 800 1344 --resume /path/to/YOLOS-B

Visualization

  • Visualize box predictions and the object-category distribution:
  1. To get the visualizations in the paper, you need the YOLOS models fine-tuned on COCO. Run the following command to get the 100 Det-Tok predictions on the COCO val split; it will generate /path/to/YOLOS/visualization/modelname-eval-800-eval-pred.json
python cocoval_predjson_generation.py --coco_path /path/to/coco --batch_size 1 --backbone_name small --eval --eval_size 800 --init_pe_size 512 864 --mid_pe_size 512 864 --resume /path/to/yolos-s-model.pth --output_dir ./visualization
  2. To get all ground-truth object categories for all images in the COCO val split, run the following command to generate /path/to/YOLOS/visualization/coco-valsplit-cls-dist.json
python cocoval_gtclsjson_generation.py --coco_path /path/to/coco --batch_size 1 --output_dir ./visualization
  3. To visualize the distribution of the Det-Toks' bounding boxes and categories, run the following command to generate .png files in /path/to/YOLOS/visualization/
python visualize_dettoken_dist.py --visjson /path/to/YOLOS/visualization/modelname-eval-800-eval-pred.json --cococlsjson /path/to/YOLOS/visualization/coco-valsplit-cls-dist.json

[Figures: per-category distributions of predictions from Det-Tok-41 and Det-Tok-96]

Acknowledgement ❤️

This project is based on DETR (paper, code), DeiT (paper, code), DINO (paper, code), and timm. Thanks for their wonderful work.

Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and a citation 📝:

@article{YOLOS,
  title={You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection},
  author={Fang, Yuxin and Liao, Bencheng and Wang, Xinggang and Fang, Jiemin and Qi, Jiyang and Wu, Rui and Niu, Jianwei and Liu, Wenyu},
  journal={arXiv preprint arXiv:2106.00666},
  year={2021}
}
