• Stars: 2,734
• Rank: 16,652 (Top 0.4%)
• Language: Python
• License: Apache License 2.0
• Created: 10 months ago
• Updated: 4 months ago


Vision Mamba (ICML 2024)

Efficient Visual Representation Learning with Bidirectional State Space Model

Lianghui Zhu1*, Bencheng Liao1*, Qian Zhang2, Xinlong Wang3, Wenyu Liu1, Xinggang Wang1 📧

1 Huazhong University of Science and Technology, 2 Horizon Robotics, 3 Beijing Academy of Artificial Intelligence

(*) equal contribution, (📧) corresponding author.

arXiv Preprint (arXiv:2401.09417), Hugging Face Page (🤗 2401.09417)

News

  • Feb. 10th, 2024: We have updated the Vim-tiny/small weights and training scripts. By placing the class token in the middle of the sequence, Vim achieves improved results. Further details can be found in the code and our updated arXiv paper.

  • Jan. 18th, 2024: We released our paper on arXiv. Code/models are coming soon. Please stay tuned! ☕️

Abstract

Recently, state space models (SSMs) with efficient hardware-aware designs, i.e., the Mamba deep learning model, have shown great potential for long-sequence modeling. Meanwhile, building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20K semantic segmentation tasks, Vim achieves higher performance than well-established vision transformers like DeiT, while also demonstrating significantly improved computation and memory efficiency. For example, Vim is 2.8x faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248x1248. The results demonstrate that Vim is capable of overcoming the computation and memory constraints on performing Transformer-style understanding for high-resolution images, and it has great potential to be the next-generation backbone for vision foundation models.
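As a concrete illustration of the bidirectional idea above, here is a minimal sketch, not the authors' exact Vim block: the flattened patch sequence is scanned in both directions and the two results are merged. ssm_forward and ssm_backward are stand-ins for Mamba-style selective SSM layers, which this sketch deliberately does not implement.

import torch
import torch.nn as nn

class BidirectionalSSMBlock(nn.Module):
    # Illustrative only: merge a left-to-right and a right-to-left scan
    # of the token sequence, with a residual connection.
    def __init__(self, dim: int, ssm_forward: nn.Module, ssm_backward: nn.Module):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ssm_forward = ssm_forward    # stand-in for a Mamba-style SSM layer
        self.ssm_backward = ssm_backward  # stand-in for a Mamba-style SSM layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim), e.g. flattened image patches plus a class token
        h = self.norm(x)
        out_fwd = self.ssm_forward(h)
        # reverse the token order, scan, then restore the original order
        out_bwd = self.ssm_backward(h.flip(dims=[1])).flip(dims=[1])
        return x + out_fwd + out_bwd

# Runs as-is with identity stand-ins, e.g.:
# block = BidirectionalSSMBlock(192, nn.Identity(), nn.Identity())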

Overview

Envs. for Pretraining

  • Python 3.10.13

    • conda create -n your_env_name python=3.10.13
  • torch 2.1.1 + cu118

    • pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
  • Requirements: vim_requirements.txt

    • pip install -r vim/vim_requirements.txt
  • Install causal_conv1d and mamba (a quick sanity check follows this list)

    • pip install -e "causal_conv1d>=1.1.0"
    • pip install -e mamba-1p1p1
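After these steps, a short sanity check confirms the pinned torch build and that the compiled extensions import; this is a sketch assuming the editable installs above expose the usual package names.

import torch

print(torch.__version__)          # expect 2.1.1+cu118, per the install command above
print(torch.cuda.is_available())  # the cu118 wheel needs a working CUDA 11.8 setup

import causal_conv1d  # built by `pip install -e "causal_conv1d>=1.1.0"`
import mamba_ssm      # import name assumed to be provided by the mamba-1p1p1 package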

Train Your Vim

bash vim/scripts/pt-vim-t.sh

Train Your Vim at Finer Granularity

bash vim/scripts/ft-vim-t.sh

Model Weights

Model       #param.  Top-1 Acc. (%)  Top-5 Acc. (%)  Huggingface Repo
Vim-tiny    7M       76.1            93.0            https://huggingface.co/hustvl/Vim-tiny-midclstok
Vim-tiny+   7M       78.3            94.2            https://huggingface.co/hustvl/Vim-tiny-midclstok
Vim-small   26M      80.5            95.1            https://huggingface.co/hustvl/Vim-small-midclstok
Vim-small+  26M      81.6            95.4            https://huggingface.co/hustvl/Vim-small-midclstok

Notes:

  • + means that the model is fine-tuned at finer granularity with a short schedule.
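The weights can also be fetched programmatically. Below is a minimal sketch using huggingface_hub; the checkpoint filename is a hypothetical placeholder, so check the Files tab of the Huggingface repo above for the actual name.

import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="hustvl/Vim-tiny-midclstok",
    filename="vim_tiny.pth",  # hypothetical filename; check the repo's file list
)
state = torch.load(ckpt_path, map_location="cpu")
# DeiT-style training code usually stores weights under a "model" key;
# fall back to the raw object otherwise.
state_dict = state.get("model", state)
print(len(state_dict), "tensors loaded")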

Evaluation on Provided Weights

To evaluate Vim-Ti on ImageNet-1K, run:

python main.py --eval --resume /path/to/ckpt --model vim_tiny_patch16_224_bimambav2_final_pool_mean_abs_pos_embed_with_midclstok_div2 --data-path /path/to/imagenet
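For programmatic inference with the same model name and checkpoint, a hedged sketch follows. It assumes that importing the repo's model definitions registers them with timm, as in the DeiT codebase this project builds on; models_mamba is the assumed module name, and the {"model": ...} checkpoint layout is likewise an assumption.

import torch
import timm
import models_mamba  # noqa: F401 (assumed module that registers the vim_* models with timm)

model = timm.create_model(
    "vim_tiny_patch16_224_bimambav2_final_pool_mean_abs_pos_embed_with_midclstok_div2",
    pretrained=False,
)
ckpt = torch.load("/path/to/ckpt", map_location="cpu")
model.load_state_dict(ckpt.get("model", ckpt))  # assumes DeiT-style checkpoint dict
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # one dummy 224x224 RGB image
print(logits.argmax(dim=-1))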

Acknowledgement ❤️

This project is based on Mamba (paper, code), Causal-Conv1d (code), and DeiT (paper, code). Thanks for their wonderful work.

Citation

If you find Vim useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{vim,
  title={Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model},
  author={Lianghui Zhu and Bencheng Liao and Qian Zhang and Xinlong Wang and Wenyu Liu and Xinggang Wang},
  journal={arXiv preprint arXiv:2401.09417},
  year={2024}
}

More Repositories

1. 4DGaussians: [CVPR 2024] 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering (Jupyter Notebook, 2,115 stars)
2. YOLOP: You Only Look Once for Panoptic Driving Perception (MIR 2022) (Python, 1,906 stars)
3. MapTR: [ICLR'23 Spotlight] Structured Modeling and Learning for Online Vectorized HD Map Construction (Python, 1,034 stars)
4. YOLOS: [NeurIPS 2021] You Only Look at One Sequence (Jupyter Notebook, 826 stars)
5. GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models (CVPR 2024) (Python, 632 stars)
6. VAD: [ICCV 2023] Vectorized Scene Representation for Efficient Autonomous Driving (Python, 628 stars)
7. SparseInst: [CVPR 2022] Sparse Instance Activation for Real-Time Instance Segmentation (Python, 558 stars)
8. Matte-Anything: [Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models (Python, 473 stars)
9. QueryInst: [ICCV 2021] Instances as Queries (Python, 402 stars)
10. TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation (CVPR 2022) (Python, 375 stars)
11. MIMDet: [ICCV 2023] You Only Look at One Partial Sequence (Python, 336 stars)
12. TiNeuVox: Fast Dynamic Radiance Fields with Time-Aware Neural Voxels (SIGGRAPH Asia 2022) (Python, 322 stars)
13. ViTMatte: [Information Fusion] Boosting Image Matting with Pretrained Plain Vision Transformers (Python, 245 stars)
14. TeViT: Temporally Efficient Vision Transformer for Video Instance Segmentation (CVPR 2022, Oral) (Python, 237 stars)
15. GKT: Efficient and Robust 2D-to-BEV Representation Learning via Geometry-guided Kernel Transformer (Python, 218 stars)
16. BMaskR-CNN: [ECCV 2020] Boundary-preserving Mask R-CNN (Python, 184 stars)
17. HAIS: Hierarchical Aggregation for 3D Instance Segmentation (ICCV 2021) (Python, 163 stars)
18. Symphonies: [CVPR 2024] Symphonies (Scene-from-Insts): Symphonize 3D Semantic Scene Completion with Contextual Instance Queries (Python, 160 stars)
19. VMA: A general map auto-annotation framework based on MapTR, with high flexibility in terms of spatial scale and element type (Python, 157 stars)
20. WeakTr: Exploring Plain Vision Transformer for Weakly-supervised Semantic Segmentation (Python, 122 stars)
21. LaneGAP: [ECCV 2024] Lane Graph as Path: Continuity-preserving Path-wise Modeling for Online Lane Graph Construction (114 stars)
22. SparseTrack: Official PyTorch implementation of SparseTrack (the new version of the code will come soon) (Python, 108 stars)
23. CrossVIS: [ICCV 2021] Crossover Learning for Fast Online Video Instance Segmentation (Python, 85 stars)
24. MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens (CVPR 2022) (Python, 80 stars)
25. PolarDETR (73 stars)
26. BoxTeacher: [CVPR 2023] Exploring High-Quality Pseudo Masks for Weakly Supervised Instance Segmentation (Python, 72 stars)
27. TinyDet (Python, 68 stars)
28. osp: [ECCV 2024] Occupancy as Set of Points (Python, 63 stars)
29. GNeuVox: Generalizable Neural Voxels for Fast Human Radiance Fields (Python, 60 stars)
30. AziNorm: Exploiting the Radial Symmetry of Point Cloud for Azimuth-Normalized 3D Perception (CVPR 2022) (Python, 53 stars)
31. Featurized-QueryRCNN: Featurized Query R-CNN (Python, 46 stars)
32. RILS: [CVPR 2023] Masked Visual Reconstruction in Language Semantic Space (https://arxiv.org/abs/2301.06958) (Python, 43 stars)
33. PD-Quant: [CVPR 2023] Post-Training Quantization Based on Prediction Difference Metric (Python, 39 stars)
34. MIM4D: Masked Modeling with Multi-View Video for Autonomous Driving Representation Learning (36 stars)
35. NeuSample: Code of "NeuSample: Neural Sample Field for Efficient View Synthesis" (Python, 36 stars)
36. SAUNet: A Simple Adaptive Unfolding Network for Hyperspectral Image Reconstruction (Python, 29 stars)
37. Query6DoF: Learning Sparse Queries as Implicit Shape Prior for Category-Level 6DoF Pose Estimation (Python, 25 stars)
38. HDR-HexPlane: Fast High Dynamic Range Radiance Fields for Dynamic Scenes (3DV 2024) (Python, 25 stars)
39. WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition (Python, 24 stars)
40. ViTGaze (Python, 23 stars)
41. CircuitFormer: [NeurIPS 2023] Circuit as Set of Points (Python, 23 stars)
42. EfficientPose (Cuda, 20 stars)
43. MMIL-Transformer (Python, 20 stars)
44. LSFA: Real-Time and Accurate Object Detection in Compressed Video by Long Short-term Feature Aggregation (Python, 19 stars)
45. OpenInst (Python, 14 stars)
46. BoxCaseg (Jupyter Notebook, 14 stars)
47. mancs: A multi-task attentional network with curriculum sampling for person re-identification (Python, 12 stars)
48. RND-SCI: A Range-Null Space Decomposition Approach for Fast and Flexible Spectral Compressive Imaging (Python, 10 stars)
49. DGCN (Python, 9 stars)
50. PySA: Pyramid Self-Attention for Semantic Segmentation (8 stars)
51. EM-OLN (Python, 7 stars)
52. BCF: Xinggang Wang, Bin Feng, Xiang Bai, Wenyu Liu, and Longin Jan Latecki. Bag of Contour Fragments for Robust Shape Classification. Pattern Recognition, Volume 47, Issue 6, June 2014, Pages 2116-2125. (MATLAB, 6 stars)
53. DiG (Python, 3 stars)
54. TOGS: The official code of "TOGS: Gaussian Splatting with Temporal Opacity Offset for Real-Time 4D DSA Rendering" (Python, 2 stars)
55. tbcl (1 star)
56. DeepTunel (Python, 1 star)