Universal Instance Perception as Object Discovery and Retrieval
This is the official implementation of the paper Universal Instance Perception as Object Discovery and Retrieval.
News
- 🏆 We are the runner-up in the Segmentation in the Wild challenge.
- 🏆 We are the winner of the BDD100K MOT Challenge and the runner-up of the BDD100K MOTS Challenge at the CVPR2023 workshop.
Highlight
- UNINEXT is accepted by CVPR2023.
- UNINEXT reformulates diverse instance perception tasks into a unified object discovery and retrieval paradigm and can flexibly perceive different types of objects by simply changing the input prompts.
- UNINEXT achieves superior performance on 20 challenging benchmarks using a single model with a single set of parameters.
Introduction
Object-centric understanding is one of the most essential and challenging problems in computer vision. In this work, we mainly discuss 10 sub-tasks, distributed on the vertices of the cube shown in the figure above. Since all these tasks aim to perceive instances with certain properties, UNINEXT reorganizes them into three types according to their input prompts:
- Category Names
  - Object Detection
  - Instance Segmentation
  - Multiple Object Tracking (MOT)
  - Multi-Object Tracking and Segmentation (MOTS)
  - Video Instance Segmentation (VIS)
- Language Expressions
  - Referring Expression Comprehension (REC)
  - Referring Expression Segmentation (RES)
  - Referring Video Object Segmentation (R-VOS)
- Target Annotations
  - Single Object Tracking (SOT)
  - Video Object Segmentation (VOS)
Then we propose a unified prompt-guided object discovery and retrieval formulation to solve all the above tasks. Extensive experiments demonstrate that UNINEXT achieves superior performance on 20 challenging benchmarks.
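To make the formulation concrete, here is a minimal, self-contained PyTorch sketch of the discover-then-retrieve idea. Every name in it (`encode_prompt`, `discover`, `retrieve`) and all shapes are illustrative assumptions, not the actual UNINEXT code; in UNINEXT these roles are played by learned prompt encoders and a DETR-style decoder.

```python
# Illustrative sketch only -- NOT the UNINEXT implementation.
import torch
import torch.nn.functional as F

def encode_prompt(prompt_tokens: torch.Tensor) -> torch.Tensor:
    # Stand-in for the prompt encoder: category names, language expressions,
    # and target annotations would all be embedded into one shared space.
    return F.normalize(prompt_tokens.mean(dim=0, keepdim=True), dim=-1)

def discover(image_features: torch.Tensor, num_queries: int = 5) -> torch.Tensor:
    # Stand-in for object discovery: emit N candidate instance embeddings
    # (the real model produces these with a DETR-style decoder).
    return F.normalize(image_features[:num_queries], dim=-1)

def retrieve(instances: torch.Tensor, prompt: torch.Tensor, top_k: int = 1):
    # Retrieval: rank the discovered instances by similarity to the prompt.
    scores = (instances @ prompt.t()).squeeze(-1)  # (N,) cosine similarities
    return scores.topk(top_k)

# Toy run: 10 candidate features and a prompt built from 3 tokens, all 16-dim.
torch.manual_seed(0)
image_features = torch.randn(10, 16)
prompt_tokens = torch.randn(3, 16)
scores, indices = retrieve(discover(image_features), encode_prompt(prompt_tokens))
print(f"best-matching instance: {indices.item()} (score {scores.item():.3f})")
```

The point of the sketch is that all three prompt types reduce to a prompt embedding in a shared space, so switching tasks only changes how the prompt is produced, not how objects are discovered or retrieved.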
Demo
(Demo video: UNINEXT_DEMO_VID_9M.mp4)
UNINEXT can flexibly perceive various types of objects by simply changing the input prompts, such as category names, language expressions, and target annotations. We also provide a simple demo script that supports four image-level tasks (object detection, instance segmentation, REC, and RES).
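As a rough orientation, the hypothetical mapping below shows how the four image-level demo tasks reduce to the prompt types above. The keys and example prompts are illustrative assumptions, not the demo script's actual arguments; see the script itself for its real interface.

```python
# Hypothetical task-to-prompt mapping -- illustrative only, not the demo
# script's real arguments or configuration.
TASK_PROMPTS = {
    "object_detection":      ("category_names", ["person", "car", "dog"]),
    "instance_segmentation": ("category_names", ["person", "car", "dog"]),
    "rec":                   ("language_expression", "the man in the red shirt"),
    "res":                   ("language_expression", "the man in the red shirt"),
}

for task, (prompt_type, prompt) in TASK_PROMPTS.items():
    print(f"{task}: {prompt_type} -> {prompt}")
```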
Results
Retrieval by Category Names
Retrieval by Language Expressions
Retrieval by Target Annotations
Getting started
- Installation: Please refer to INSTALL.md for more details.
- Data preparation: Please refer to DATA.md for more details.
- Training: Please refer to TRAIN.md for more details.
- Testing: Please refer to TEST.md for more details.
- Model zoo: Please refer to MODEL_ZOO.md for more details.
Citing UNINEXT
If you find UNINEXT useful in your research, please consider citing:
@inproceedings{UNINEXT,
  title={Universal Instance Perception as Object Discovery and Retrieval},
  author={Yan, Bin and Jiang, Yi and Wu, Jiannan and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
  booktitle={CVPR},
  year={2023}
}
Acknowledgments
- Thanks to Unicorn for its experience in unifying four object-tracking tasks (SOT, MOT, VOS, MOTS).
- Thanks to VNext for its experience in Video Instance Segmentation (VIS).
- Thanks to ReferFormer for its experience in REC, RES, and R-VOS.
- Thanks to GLIP for the idea of unifying object detection and phrase grounding.
- Thanks to Detic for the implementation of multi-dataset training.
- Thanks to detrex for the implementation of the denoising mechanism.