  • Stars: 2,502
  • Rank: 18,352 (Top 0.4%)
  • Language: Python
  • License: MIT License
  • Created: almost 2 years ago
  • Updated: 3 months ago

Repository Details

[CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions

[Chinese Version]

We are currently receiving a large number of issues; our team will review and resolve them one by one. Please stay tuned.

INTERN-2.5: Multimodal Multitask General Large Model

The official implementation of

InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions.

[Paper] [Blog in Chinese]

Highlights

  • 👍 The strongest open-source universal vision backbone, with up to 3 billion parameters
  • 🏆 Achieves 90.1% Top-1 accuracy on ImageNet, the highest among open-source models
  • 🏆 Achieves 65.5 mAP on the COCO object detection benchmark, the only model to exceed 65.0 mAP

Related Projects

Foundation Models

  • Uni-Perceiver: A Pre-training unified architecture for generic perception for zero-shot and few-shot tasks
  • Uni-Perceiver v2: A generalist model for large-scale vision and vision-language tasks
  • M3I-Pretraining: One-stage pre-training paradigm via maximizing multi-modal mutual information

Autonomous Driving

  • BEVFormer: A cutting-edge baseline for camera-based 3D detection
  • BEVFormer v2: Adapting modern image backbones to Bird's-Eye-View recognition via perspective supervision

Application in Challenges

News

  • Mar 14, 2023: 🚀 "INTERN-2.5" is released!
  • Feb 28, 2023: 🚀 InternImage is accepted to CVPR 2023!
  • Nov 18, 2022: 🚀 InternImage-XL merged into BEVFormer v2 achieves state-of-the-art performance of 63.4 NDS on nuScenes Camera Only.
  • Nov 10, 2022: 🚀 InternImage-H achieves a new record 65.4 mAP on COCO detection test-dev and 62.9 mIoU on ADE20K, outperforming previous models by a large margin.

History

  • Models/APIs for other downstream tasks
  • Support CVPR 2023 Workshop on End-to-End Autonomous Driving, see here
  • Support Segment Anything
  • Support extracting intermediate features, see here
  • Low-cost training with DeepSpeed, see here
  • Compiling-free .whl package of DCNv3 operator, see here
  • InternImage-H(1B)/G(3B)
  • TensorRT inference for classification/detection/segmentation models
  • Classification code of the InternImage series
  • InternImage-T/S/B/L/XL ImageNet-1K pretrained model
  • InternImage-L/XL ImageNet-22K pretrained model
  • InternImage-T/S/B/L/XL detection and instance segmentation model
  • InternImage-T/S/B/L/XL semantic segmentation model

Introduction

"INTERN-2.5" is a powerful multimodal multitask general model jointly released by SenseTime and Shanghai AI Laboratory. It consists of large-scale vision foundation model "InternImage", pre-training method "M3I-Pretraining", generic decoder "Uni-Perceiver" series, and generic encoder for autonomous driving perception "BEVFormer" series.

Applications

🌅 Image Modality Tasks

"INTERN-2.5" achieved an impressive Top-1 accuracy of 90.1% on the ImageNet benchmark dataset using only publicly available data for image classification. Apart from two undisclosed models trained with additional datasets by Google and Microsoft, "INTERN-2.5" is the only open-source model that achieves a Top-1 accuracy of over 90.0%, and it is also the largest model in scale worldwide.

"INTERN-2.5" outperformed all other models worldwide on the COCO object detection benchmark dataset with a remarkable mAP of 65.5, making it the only model that surpasses 65 mAP in the world.

"INTERN-2.5" also demonstrated world's best performance on 16 other important visual benchmark datasets, covering a wide range of tasks such as classification, detection, and segmentation, making it the top-performing model across multiple domains.

Performance

  • Classification

| Task | Benchmark | Score |
| --- | --- | --- |
| Image Classification | ImageNet | 90.1 |
| Scene Classification | Places365 | 61.2 |
| Scene Classification | Places 205 | 71.7 |
| Long-Tail Classification | iNaturalist 2018 | 92.3 |

  • Detection

| Task | Benchmark | Score (mAP) |
| --- | --- | --- |
| Conventional Object Detection | COCO | 65.5 |
| Conventional Object Detection | VOC 2007 | 94.0 |
| Conventional Object Detection | VOC 2012 | 97.2 |
| Conventional Object Detection | OpenImage | 74.1 |
| Long-Tail Object Detection | LVIS minival | 65.8 |
| Long-Tail Object Detection | LVIS val | 63.2 |
| Autonomous Driving Object Detection | BDD100K | 38.8 |
| Autonomous Driving Object Detection | nuScenes | 64.8 |
| Dense Object Detection | CrowdHuman | 97.2 |

  • Segmentation

| Task | Benchmark | Score (mIoU) |
| --- | --- | --- |
| Semantic Segmentation | ADE20K | 62.9 |
| Semantic Segmentation | COCO Stuff-10K | 59.6 |
| Semantic Segmentation | Pascal Context | 70.3 |
| Street Segmentation | CityScapes | 86.1 |
| RGBD Segmentation | NYU Depth V2 | 69.7 |

🌁 📖 Image and Text Cross-Modal Tasks

Image-Text Retrieval: "INTERN-2.5" can quickly locate and retrieve the images that are most semantically relevant to a textual query. This capability applies to both videos and image collections, and it can be further combined with object detection boxes to enable a variety of applications, helping users find the image resources they need quickly and easily. For example, it can return the images in an album that match a given textual description.
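How such retrieval is typically wired up: embed the gallery images once, embed the query text, and rank by cosine similarity. The sketch below assumes a hypothetical `encode_text` function and precomputed `image_feats` from a Uni-Perceiver-style encoder; it is not the repository's actual retrieval API.

```python
# A minimal sketch of text-to-image retrieval over precomputed embeddings.
# `encode_text` and `image_feats` are hypothetical stand-ins for an encoder
# and its cached gallery features; not the repository's actual API.
import torch
import torch.nn.functional as F


def retrieve_top_k(query: str, image_feats: torch.Tensor, encode_text, k: int = 5):
    """Return the indices of the k gallery images most similar to `query`."""
    q = F.normalize(encode_text([query]), dim=-1)   # (1, D) text embedding
    g = F.normalize(image_feats, dim=-1)            # (N, D) image embeddings
    sims = (q @ g.T).squeeze(0)                     # cosine similarity, shape (N,)
    return sims.topk(min(k, sims.numel())).indices.tolist()
```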

Image-To-Text: "INTERN-2.5" has strong understanding capabilities across visual-to-text tasks such as image captioning, visual question answering, visual reasoning, and optical character recognition. In autonomous driving, for example, it can enhance scene perception and understanding, help the vehicle judge traffic-signal status, road signs, and other information, and provide effective perceptual support for vehicle decision-making and planning.

Performance

| Task | Benchmark | Score |
| --- | --- | --- |
| Image Captioning | COCO Caption | 148.2 |
| Fine-tuning Image-Text Retrieval | COCO Caption | 76.4 |
| Fine-tuning Image-Text Retrieval | Flickr30k | 94.8 |
| Zero-shot Image-Text Retrieval | Flickr30k | 89.1 |

Released Models

Open-source Visual Pretrained Models

| name | pretrain | pre-training resolution | #param | download |
| --- | --- | --- | --- | --- |
| InternImage-L | ImageNet-22K | 384x384 | 223M | ckpt |
| InternImage-XL | ImageNet-22K | 384x384 | 335M | ckpt |
| InternImage-H | Joint 427M | 384x384 | 1.08B | ckpt |
| InternImage-G | - | 384x384 | 3B | ckpt |
ImageNet-1K Image Classification

| name | pretrain | resolution | acc@1 | #param | FLOPs | download |
| --- | --- | --- | --- | --- | --- | --- |
| InternImage-T | ImageNet-1K | 224x224 | 83.5 | 30M | 5G | ckpt \| cfg |
| InternImage-S | ImageNet-1K | 224x224 | 84.2 | 50M | 8G | ckpt \| cfg |
| InternImage-B | ImageNet-1K | 224x224 | 84.9 | 97M | 16G | ckpt \| cfg |
| InternImage-L | ImageNet-22K | 384x384 | 87.7 | 223M | 108G | ckpt \| cfg |
| InternImage-XL | ImageNet-22K | 384x384 | 88.0 | 335M | 163G | ckpt \| cfg |
| InternImage-H | Joint 427M | 640x640 | 89.6 | 1.08B | 1478G | ckpt \| cfg |
| InternImage-G | - | 512x512 | 90.1 | 3B | 2700G | ckpt \| cfg |
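The released classification checkpoints can be downloaded via the ckpt links above. A minimal sketch of loading one and inspecting its contents follows; the file name is hypothetical and the exact state-dict layout may differ between releases.

```python
# Load a released classification checkpoint and report its size.
# The file name below is hypothetical; adjust it to the ckpt you downloaded.
import torch

ckpt = torch.load("internimage_t_1k_224.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # some checkpoints nest weights under "model"
n_params = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"{len(state_dict)} tensors, {n_params / 1e6:.1f}M parameters")
```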
COCO Object Detection and Instance Segmentation

| backbone | method | schd | box mAP | mask mAP | #param | FLOPs | download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| InternImage-T | Mask R-CNN | 1x | 47.2 | 42.5 | 49M | 270G | ckpt \| cfg |
| InternImage-T | Mask R-CNN | 3x | 49.1 | 43.7 | 49M | 270G | ckpt \| cfg |
| InternImage-S | Mask R-CNN | 1x | 47.8 | 43.3 | 69M | 340G | ckpt \| cfg |
| InternImage-S | Mask R-CNN | 3x | 49.7 | 44.5 | 69M | 340G | ckpt \| cfg |
| InternImage-B | Mask R-CNN | 1x | 48.8 | 44.0 | 115M | 501G | ckpt \| cfg |
| InternImage-B | Mask R-CNN | 3x | 50.3 | 44.8 | 115M | 501G | ckpt \| cfg |
| InternImage-L | Cascade | 1x | 54.9 | 47.7 | 277M | 1399G | ckpt \| cfg |
| InternImage-L | Cascade | 3x | 56.1 | 48.5 | 277M | 1399G | ckpt \| cfg |
| InternImage-XL | Cascade | 1x | 55.3 | 48.1 | 387M | 1782G | ckpt \| cfg |
| InternImage-XL | Cascade | 3x | 56.2 | 48.8 | 387M | 1782G | ckpt \| cfg |

| backbone | method | box mAP (val/test) | #param | FLOPs | download |
| --- | --- | --- | --- | --- | --- |
| InternImage-H | DINO (TTA) | 65.0 / 65.4 | 2.18B | TODO | TODO |
| InternImage-G | DINO (TTA) | 65.3 / 65.5 | 3B | TODO | TODO |
ADE20K Semantic Segmentation

| backbone | method | resolution | mIoU (ss/ms) | #param | FLOPs | download |
| --- | --- | --- | --- | --- | --- | --- |
| InternImage-T | UperNet | 512x512 | 47.9 / 48.1 | 59M | 944G | ckpt \| cfg |
| InternImage-S | UperNet | 512x512 | 50.1 / 50.9 | 80M | 1017G | ckpt \| cfg |
| InternImage-B | UperNet | 512x512 | 50.8 / 51.3 | 128M | 1185G | ckpt \| cfg |
| InternImage-L | UperNet | 640x640 | 53.9 / 54.1 | 256M | 2526G | ckpt \| cfg |
| InternImage-XL | UperNet | 640x640 | 55.0 / 55.3 | 368M | 3142G | ckpt \| cfg |
| InternImage-H | UperNet | 896x896 | 59.9 / 60.3 | 1.12B | 3566G | ckpt \| cfg |
| InternImage-H | Mask2Former | 896x896 | 62.5 / 62.9 | 1.31B | 4635G | ckpt \| cfg |
Main Results of FPS

  • Export classification model from PyTorch to TensorRT
  • Export detection model from PyTorch to TensorRT
  • Export segmentation model from PyTorch to TensorRT

| name | resolution | #param | FLOPs | batch 1 FPS (TensorRT) |
| --- | --- | --- | --- | --- |
| InternImage-T | 224x224 | 30M | 5G | 156 |
| InternImage-S | 224x224 | 50M | 8G | 129 |
| InternImage-B | 224x224 | 97M | 16G | 116 |
| InternImage-L | 384x384 | 223M | 108G | 56 |
| InternImage-XL | 384x384 | 335M | 163G | 47 |
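The FPS numbers above are measured with TensorRT at batch size 1. For a rough PyTorch-side comparison, a generic batch-1 timing loop looks like the sketch below; it requires a CUDA device and is not the script used to produce the table.

```python
# A generic batch-1 timing loop for a PyTorch model (the table above reports
# TensorRT numbers; this sketch is only for rough PyTorch-side comparisons).
import time
import torch


@torch.no_grad()
def batch1_fps(model: torch.nn.Module, resolution: int = 224,
               iters: int = 100, warmup: int = 20) -> float:
    model = model.eval().cuda()
    x = torch.randn(1, 3, resolution, resolution, device="cuda")
    for _ in range(warmup):          # warm up kernels / cuDNN autotuning
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()         # wait for all queued kernels before stopping the clock
    return iters / (time.perf_counter() - start)
```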

Before using MMDeploy to convert our PyTorch models to TensorRT, please make sure the DCNv3 custom operator has been built correctly. You can build it with the following commands:

export MMDEPLOY_DIR=/the/root/path/of/MMDeploy

# prepare our custom ops, you can find it at InternImage/tensorrt/modulated_deform_conv_v3
cp -r modulated_deform_conv_v3 ${MMDEPLOY_DIR}/csrc/mmdeploy/backend_ops/tensorrt

# build custom ops
cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=trt -DTENSORRT_DIR=${TENSORRT_DIR} -DCUDNN_DIR=${CUDNN_DIR} ..
make -j$(nproc) && make install

# install the mmdeploy after building custom ops
cd ${MMDEPLOY_DIR}
pip install -e .

For more details on building custom ops, please refer to this document.
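Before running a conversion, it can be worth checking that the freshly built TensorRT plugin library (which should now include the DCNv3 op) actually loads. The library path in the sketch below is an assumption and may differ across MMDeploy versions and build configurations.

```python
# Optional sanity check that the built TensorRT custom-op library can be loaded.
# The library path is an assumption; adjust it to where your MMDeploy build put it.
import ctypes
import os

lib_path = os.path.expandvars("${MMDEPLOY_DIR}/build/lib/libmmdeploy_tensorrt_ops.so")
ctypes.CDLL(lib_path)  # raises OSError if the plugin failed to build or link
print("Loaded TensorRT custom ops from", lib_path)
```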

Citations

If this work is helpful for your research, please consider citing the following BibTeX entries.

@article{wang2022internimage,
  title={InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions},
  author={Wang, Wenhai and Dai, Jifeng and Chen, Zhe and Huang, Zhenhang and Li, Zhiqi and Zhu, Xizhou and Hu, Xiaowei and Lu, Tong and Lu, Lewei and Li, Hongsheng and others},
  journal={arXiv preprint arXiv:2211.05778},
  year={2022}
}

@inproceedings{zhu2022uni,
  title={Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks},
  author={Zhu, Xizhou and Zhu, Jinguo and Li, Hao and Wu, Xiaoshi and Li, Hongsheng and Wang, Xiaohua and Dai, Jifeng},
  booktitle={CVPR},
  pages={16804--16815},
  year={2022}
}

@article{zhu2022unimoe,
  title={Uni-perceiver-moe: Learning sparse generalist models with conditional moes},
  author={Zhu, Jinguo and Zhu, Xizhou and Wang, Wenhai and Wang, Xiaohua and Li, Hongsheng and Wang, Xiaogang and Dai, Jifeng},
  journal={arXiv preprint arXiv:2206.04674},
  year={2022}
}

@article{li2022uni,
  title={Uni-Perceiver v2: A Generalist Model for Large-Scale Vision and Vision-Language Tasks},
  author={Li, Hao and Zhu, Jinguo and Jiang, Xiaohu and Zhu, Xizhou and Li, Hongsheng and Yuan, Chun and Wang, Xiaohua and Qiao, Yu and Wang, Xiaogang and Wang, Wenhai and others},
  journal={arXiv preprint arXiv:2211.09808},
  year={2022}
}

@article{yang2022bevformer,
  title={BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision},
  author={Yang, Chenyu and Chen, Yuntao and Tian, Hao and Tao, Chenxin and Zhu, Xizhou and Zhang, Zhaoxiang and Huang, Gao and Li, Hongyang and Qiao, Yu and Lu, Lewei and others},
  journal={arXiv preprint arXiv:2211.10439},
  year={2022}
}

@article{su2022towards,
  title={Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information},
  author={Su, Weijie and Zhu, Xizhou and Tao, Chenxin and Lu, Lewei and Li, Bin and Huang, Gao and Qiao, Yu and Wang, Xiaogang and Zhou, Jie and Dai, Jifeng},
  journal={arXiv preprint arXiv:2211.09807},
  year={2022}
}

@inproceedings{li2022bevformer,
  title={Bevformer: Learning bird’s-eye-view representation from multi-camera images via spatiotemporal transformers},
  author={Li, Zhiqi and Wang, Wenhai and Li, Hongyang and Xie, Enze and Sima, Chonghao and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  booktitle={ECCV},
  pages={1--18},
  year={2022},
}

More Repositories

1. InternVL (Python, 5,753 stars): [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance)
2. LLaMA-Adapter (Python, 5,717 stars): [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
3. DragGAN (Python, 4,996 stars): Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (full-featured DragGAN implementation with online demo and local deployment; code and models fully open-sourced; supports Windows, macOS, and Linux)
4. InternGPT (Python, 3,198 stars): InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
5. Ask-Anything (Python, 2,984 stars): [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
6. InternVideo (Python, 1,392 stars): [ECCV2024] Video Foundation Models & Data for Multimodal Understanding
7. VisionLLM (Python, 874 stars): VisionLLM Series
8. VideoMamba (Python, 787 stars): [ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding
9. OmniQuant (Python, 691 stars): [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
10. VideoMAEv2 (Python, 486 stars): [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
11. DCNv4 (Python, 463 stars): [CVPR 2024] Deformable Convolution v4
12. all-seeing (Python, 452 stars): [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
13. GITM (445 stars): Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory
14. Multi-Modality-Arena (Python, 428 stars): Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
15. Vision-RWKV (Python, 352 stars): Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures
16. CaFo (Python, 344 stars): [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners
17. PonderV2 (Python, 311 stars): PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
18. LAMM (Python, 296 stars): [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
19. UniFormerV2 (Python, 280 stars): [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer
20. unmasked_teacher (Python, 276 stars): [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models
21. OmniCorpus (Python, 259 stars): OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
22. HumanBench (Python, 231 stars): The official implementation of HumanBench (CVPR2023)
23. Instruct2Act (Python, 223 stars): Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
24. EfficientQAT (Python, 198 stars): EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
25. gv-benchmark (Python, 189 stars): General Vision Benchmark, GV-B, a project from OpenGVLab
26. ControlLLM (Python, 181 stars): ControlLLM: Augment Language Models with Tools by Searching on Graphs
27. InternVideo2 (152 stars)
28. UniHCP (Python, 149 stars): Official PyTorch implementation of UniHCP
29. efficient-video-recognition (Python, 114 stars)
30. SAM-Med2D (Jupyter Notebook, 114 stars): Official implementation of SAM-Med2D
31. EgoVideo (Jupyter Notebook, 103 stars): [CVPR 2024 Champions] Solutions for EgoVis Challenges in CVPR 2024
32. DiffRate (Jupyter Notebook, 86 stars): [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques, while incorporating a differentiable compression rate.
33. MMT-Bench (Python, 85 stars): ICML'2024 | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI
34. Awesome-DragGAN (75 stars): Awesome-DragGAN: A curated list of papers, tutorials, and repositories related to DragGAN
35. MM-NIAH (Python, 70 stars): The official implementation of the paper "Needle In A Multimodal Haystack"
36. M3I-Pretraining (69 stars)
37. STM-Evaluation (Python, 69 stars)
38. MUTR (Python, 65 stars): [AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation
39. LCL (Python, 63 stars): Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
40. ChartAst (Python, 60 stars): ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.
41. LORIS (Python, 54 stars): Long-Term Rhythmic Video Soundtracker, ICML2023
42. DDPS (Python, 53 stars): Official implementation of "Denoising Diffusion Semantic Segmentation with Mask Prior Modeling"
43. Awesome-LLM4Tool (52 stars): A curated list of papers, repositories, tutorials, and anything related to large language models for tools
44. PIIP (Python, 51 stars): NeurIPS 2024 Spotlight ⭐️ Parameter-Inverted Image Pyramid Networks (PIIP)
45. InternVL-MMDetSeg (Jupyter Notebook, 50 stars): Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed
46. GUI-Odyssey (Python, 47 stars): GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents, consisting of 7,735 episodes from 6 mobile devices and spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combos.
47. Siamese-Image-Modeling (Python, 33 stars): [CVPR 2023] Implementation of Siamese Image Modeling for Self-Supervised Vision Representation Learning
48. De-focus-Attention-Networks (Python, 28 stars): Learning 1D Causal Visual Representation with De-focus Attention Networks
49. Multitask-Model-Selector (Python, 27 stars): Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector"
50. Official-ConvMAE-Det (Python, 13 stars)
51. perception_test_iccv2023 (Python, 13 stars): Champion solutions repository for the Perception Test challenges in the ICCV 2023 workshop
52. opengvlab.github.io (12 stars)
53. MovieMind (9 stars)
54. EmbodiedGPT (5 stars)
55. DriveMLM (3 stars)
56. .github (2 stars)