Unicorn 🦄: Towards Grand Unification of Object Tracking
This repository is the project page for the paper "Towards Grand Unification of Object Tracking".
Highlight
- Unicorn is accepted to ECCV 2022 as an oral presentation!
- Unicorn is the first framework to demonstrate grand unification across four object-tracking tasks.
- Unicorn achieves strong performance on eight tracking benchmarks.
Introduction
- The object-tracking field mainly consists of four sub-tasks: Single Object Tracking (SOT), Multiple Object Tracking (MOT), Video Object Segmentation (VOS), and Multi-Object Tracking and Segmentation (MOTS). Most previous approaches are developed for only one, or a subset, of these sub-tasks.
- For the first time, Unicorn unifies the network architecture and the learning paradigm across all four tracking tasks. Moreover, Unicorn sets new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters.
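To make the unification claim concrete, below is a minimal, hypothetical PyTorch-style sketch (the `UnifiedTracker` class and all names and shapes are our own illustration, not Unicorn's actual code): one shared backbone and one correspondence step feed lightweight box and mask heads, so a single set of weights can serve box-level tasks (SOT/MOT) and mask-level tasks (VOS/MOTS).

```python
# Hypothetical sketch of a parameter-unified tracker; NOT Unicorn's actual code.
# One backbone and one correspondence step serve every task; only the output
# heads reinterpret the shared features as boxes or masks.
import torch
import torch.nn as nn

class UnifiedTracker(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stand-in backbone: a single conv layer in place of a real network.
        self.backbone = nn.Conv2d(3, feat_dim, 3, stride=2, padding=1)
        self.box_head = nn.Conv2d(feat_dim, 4, 1)   # box regression (SOT/MOT)
        self.mask_head = nn.Conv2d(feat_dim, 1, 1)  # mask logits (VOS/MOTS)

    def forward(self, reference, current):
        # Dense correspondence between the reference frame (where targets were
        # given by the user or a detector) and the current frame.
        f_ref, f_cur = self.backbone(reference), self.backbone(current)
        corr = torch.einsum("bchw,bcxy->bhwxy", f_ref, f_cur)
        # Crude target propagation: reweight current features by correspondence.
        fused = f_cur * corr.mean(dim=(1, 2)).unsqueeze(1)
        return {"boxes": self.box_head(fused), "masks": self.mask_head(fused)}

# The same forward pass (same weights) serves all four tasks; SOT/VOS supply a
# user-specified target in the reference frame, MOT/MOTS supply detections.
model = UnifiedTracker()
ref, cur = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
out = model(ref, cur)
print(out["boxes"].shape, out["masks"].shape)  # (1, 4, 32, 32), (1, 1, 32, 32)
```

In the real model the backbone, correspondence module, and heads are far more elaborate; the point of the sketch is only that nothing task-specific needs to be baked into the weights.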
This repository supports the following tasks (a sketch of a unified output record follows the list):
Image-level
- Object Detection
- Instance Segmentation
Video-level
- Single Object Tracking (SOT)
- Multiple Object Tracking (MOT)
- Video Object Segmentation (VOS)
- Multi-Object Tracking and Segmentation (MOTS)
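One way to see why a single model can cover the four video-level tasks above is that their outputs fit one record type. The sketch below uses a hypothetical `TrackResult` format of our own (not the repo's output schema), in which the tasks differ only in the number of identities per frame and in whether targets are reported as boxes or masks.

```python
# Purely illustrative unified per-frame record; field names are ours, not Unicorn's.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackResult:
    frame_id: int
    track_id: int                      # a single fixed id for SOT/VOS
    box: Optional[List[float]] = None  # [x1, y1, x2, y2] for SOT/MOT
    mask_rle: Optional[str] = None     # run-length-encoded mask for VOS/MOTS

# The four video-level tasks differ only in how many identities appear per
# frame and in whether each target carries a box or a mask:
sot  = [TrackResult(frame_id=0, track_id=0, box=[10.0, 20.0, 50.0, 80.0])]
mot  = [TrackResult(0, 1, box=[10.0, 20.0, 50.0, 80.0]),
        TrackResult(0, 2, box=[60.0, 30.0, 90.0, 70.0])]
vos  = [TrackResult(0, 0, mask_rle="hypothetical RLE string")]
mots = [TrackResult(0, 1, mask_rle="hypothetical RLE string"),
        TrackResult(0, 2, mask_rle="hypothetical RLE string")]
```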
Demo
Unicorn conquers four tracking tasks (SOT, MOT, VOS, MOTS) using the same network with the same parameters.
Demo video: video_demo_unicorn.mp4
Results
SOT
MOT (MOT17)
MOT (BDD100K)
VOS
MOTS (MOTS Challenge)
MOTS (BDD100K MOTS)
Getting started
- Installation: Please refer to install.md for more details.
- Data preparation: Please refer to data.md for more details.
- Training: Please refer to train.md for more details.
- Testing: Please refer to test.md for more details.
- Model zoo: Please refer to model_zoo.md for more details.
Citing Unicorn
If you find Unicorn useful in your research, please consider citing:
```
@inproceedings{unicorn,
  title={Towards Grand Unification of Object Tracking},
  author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
  booktitle={ECCV},
  year={2022}
}
```
Acknowledgments
- Thanks to YOLOX and CondInst for providing strong baselines for object detection and instance segmentation.
- Thanks to STARK and PyTracking for providing useful inference and evaluation toolkits for SOT and VOS.
- Thanks to ByteTrack, QDTrack and PCAN for providing useful data-processing scripts and evaluation code for MOT and MOTS.