[ICCV 2023] You Only Look at One Partial Sequence

MIMDet 🎭

Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection

Yuxin Fang1*, Shusheng Yang1*, Shijie Wang1*, Yixiao Ge2, Ying Shan2, Xinggang Wang1 📧

1 School of EIC, HUST, 2 ARC Lab, Tencent PCG.

(*) equal contribution, (📧) corresponding author.

arXiv preprint (arXiv:2204.02964)

News

  • 19 May, 2022: We update our preprint with stronger results and more analysis. Code & models are also updated in the main branch. For our previous results (code & models), please refer to the v1.0.0 branch.

  • 6 Apr, 2022: Code & models are released!

Introduction

This repo provides code and pretrained models for MIMDet (Masked Image Modeling for Detection).

  • MIMDet is a simple framework that enables a MIM pre-trained vanilla ViT to perform high-performance object-level understanding, e.g., object detection and instance segmentation.
  • In MIMDet, a MIM pre-trained vanilla ViT encoder can work surprisingly well in the challenging object-level recognition scenario, even with randomly sampled partial observations, e.g., only 25%~50% of the input embeddings (see the sketch after this list).
  • To construct multi-scale representations for object detection, a randomly initialized compact convolutional stem supplants the pre-trained large-kernel patchify stem, and its intermediate features naturally serve as the higher-resolution inputs of a feature pyramid without upsampling. Meanwhile, the pre-trained ViT is regarded only as the third stage of our detector's backbone rather than the whole feature extractor, resulting in a ConvNet-ViT hybrid architecture.
  • MIMDet w/ ViT-Base & Mask R-CNN FPN obtains 51.7 box AP and 46.2 mask AP on COCO. With ViT-L, MIMDet achieves 54.3 box AP and 48.2 mask AP.
  • We also provide an unofficial implementation of Benchmarking Detection Transfer Learning with Vision Transformers that successfully reproduces its reported results.
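To make the partial-observation idea concrete, here is a minimal PyTorch-style sketch of random token sampling. It is illustrative only: the helper name, tensor shapes, and the sample_ratio argument are our own assumptions, not the repository's actual API.

import torch

def sample_partial_tokens(tokens: torch.Tensor, sample_ratio: float = 0.5):
    """Randomly keep a fraction of patch embeddings (illustrative sketch).

    tokens: (B, N, C) patch embeddings, e.g. produced by the conv stem.
    Returns the kept tokens of shape (B, int(N * sample_ratio), C) plus
    their indices, so features can be scattered back to full resolution.
    """
    B, N, C = tokens.shape
    num_keep = max(1, int(N * sample_ratio))
    noise = torch.rand(B, N, device=tokens.device)   # per-image random order
    keep_idx = noise.argsort(dim=1)[:, :num_keep]    # (B, num_keep)
    kept = torch.gather(tokens, 1, keep_idx[..., None].expand(-1, -1, C))
    return kept, keep_idx

# Only the sampled tokens are fed through the ViT encoder during training;
# at inference the ratio can simply be raised (e.g. to 1.0) for accuracy.
x = torch.randn(2, 196, 768)                 # 14x14 patches, ViT-B width
kept, idx = sample_partial_tokens(x, 0.5)
print(kept.shape)                            # torch.Size([2, 98, 768])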

Models and Main Results

Mask R-CNN

Model                Sample Ratio   Schedule   Aug                       Box AP   Mask AP   #params   config   model / log
MIMDet-ViT-B         0.5            3x         [480-800, 1333] w/crop    51.7     46.2      127.96M   config   model / log
MIMDet-ViT-L         0.5            3x         [480-800, 1333] w/crop    54.3     48.2      349.33M   config   model / log
Benchmarking-ViT-B   -              25ep       [1024, 1024] LSJ(0.1-2)   48.0     43.0      118.67M   config   model / log
Benchmarking-ViT-B   -              50ep       [1024, 1024] LSJ(0.1-2)   50.2     44.9      118.67M   config   model / log
Benchmarking-ViT-B   -              100ep      [1024, 1024] LSJ(0.1-2)   50.4     44.9      118.67M   config   model / log

Notes:

  • The Box AP & Mask AP in the table above are obtained w/ sample ratio = 1.0, which is higher than the training sample ratio (0.25 or 0.5). MIMDet can use a lower sample ratio during training for better efficiency and a higher sample ratio during inference for better accuracy. Please refer to our paper for a detailed analysis.
  • Benchmarking-ViT-B is an unofficial implementation of Benchmarking Detection Transfer Learning with Vision Transformers.

Installation

Prerequisites

  • Linux
  • Python 3.7+
  • CUDA 10.2+
  • GCC 5+

Prepare

  • Clone:

git clone https://github.com/hustvl/MIMDet.git
cd MIMDet

  • Create a conda virtual environment and activate it:

conda create -n mimdet python=3.9
conda activate mimdet

Dataset

MIMDet is built upon detectron2, so please organize the dataset directory in detectron2's manner. We refer users to detectron2 for detailed instructions. The overall hierarchical structure is illustrated as follows:

MIMDet
├── datasets
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
│   ├── ...
├── ...
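
As a quick sanity check, the expected layout can be verified with a few lines of Python (illustrative; assumes the default ./datasets root that detectron2 looks for):

import os

# directories detectron2's builtin COCO registration expects (illustrative)
expected = [
    "datasets/coco/annotations",
    "datasets/coco/train2017",
    "datasets/coco/val2017",
]
for path in expected:
    print(path, "OK" if os.path.isdir(path) else "MISSING")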

Training

Download the full MAE pretrained (including the decoder) ViT-B Model and ViT-L Model checkpoints. See MAE repo issue #8. A quick way to verify the download is sketched below.
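
Since the full checkpoint is required, one way to verify a download is to check for decoder weights (illustrative; the filename is a placeholder for wherever you saved the checkpoint):

import torch

# illustrative check; replace the path with your downloaded checkpoint
ckpt = torch.load("mae_pretrain_vit_base_full.pth", map_location="cpu")
state = ckpt.get("model", ckpt)
print("has decoder weights:", any(k.startswith("decoder") for k in state))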

# single-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> mae_checkpoint.path=<MAE_MODEL_PATH>

# multi-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --num-machines <MACHINE_NUM> --master_addr <MASTER_ADDR> --master_port <MASTER_PORT> mae_checkpoint.path=<MAE_MODEL_PATH>

Inference

# inference
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH>

# inference with 100% sample ratio (please refer to our paper for detailed analysis)
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH> model.backbone.bottom_up.sample_ratio=1.0

Acknowledgement

This project is based on MAE, Detectron2, and timm. Thanks for their wonderful work.

License

MIMDet is released under the MIT License.

Citation

If you find our paper and code useful in your research, please consider giving a star and citation 📝 :)

@article{MIMDet,
  title={Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection},
  author={Fang, Yuxin and Yang, Shusheng and Wang, Shijie and Ge, Yixiao and Shan, Ying and Wang, Xinggang},
  journal={arXiv preprint arXiv:2204.02964},
  year={2022}
}
