• Stars: 189
• Rank: 203,489 (Top 5%)
• Language: Python
• License: Other
• Created: over 1 year ago
• Updated: about 1 year ago

Repository Details

Implementation of PF-Track

[CVPR 2023] PF-Track: End-to-end Vision-centric 3D MOT with Minimal ID-Switches

Ziqi Pang, Jie Li, Pavel Tokmakov, Dian Chen, Sergey Zagoruyko, Yu-Xiong Wang

Introduction

This is the official implementation of "Standing Between Past and Future: Spatio-Temporal Modeling for Multi-Camera 3D Multi-Object Tracking." PF-Track demonstrates significant advantages in:

  • Dramatically fewer ID-switches: PF-Track produces roughly 90% fewer ID-switches than previous methods and, at the time of writing, is also state of the art in ID-switches on nuScenes.
  • End-to-end perception and prediction: PF-Track performs detection, tracking, and motion prediction in a single end-to-end framework.
  • Easy integration with detection heads: PF-Track can cooperate with various DETR-style 3D detection heads (see the sketch after this list).
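To make the last point concrete, here is a minimal sketch of how a DETR-style 3D detection head can plug into query-based tracking. It illustrates the interface only and is not the repository's actual API: every name in it (DETRStyleHead, QueryBasedTracker, the 0.4 score threshold, and so on) is hypothetical.

import torch
import torch.nn as nn

class DETRStyleHead(nn.Module):
    """Stand-in for any DETR-style 3D detection head: it refines object
    queries against multi-view image features and decodes boxes and scores."""
    def __init__(self, embed_dim=256, num_classes=10, box_dim=10):
        super().__init__()
        self.decoder = nn.TransformerDecoderLayer(embed_dim, nhead=8, batch_first=True)
        self.box_branch = nn.Linear(embed_dim, box_dim)
        self.cls_branch = nn.Linear(embed_dim, num_classes)

    def forward(self, queries, img_feats):
        refined = self.decoder(queries, img_feats)  # cross-attend to image features
        return refined, self.box_branch(refined), self.cls_branch(refined)

class QueryBasedTracker(nn.Module):
    """Per frame, track queries carried over from the previous frame are
    concatenated with fresh detection queries, so one head serves both
    detection (new queries) and tracking (persistent queries)."""
    def __init__(self, head, num_det_queries=300, embed_dim=256):
        super().__init__()
        self.head = head
        self.det_queries = nn.Parameter(torch.randn(num_det_queries, embed_dim))

    def forward(self, img_feats, track_queries=None, score_thresh=0.4):
        b = img_feats.shape[0]
        queries = self.det_queries.unsqueeze(0).expand(b, -1, -1)
        if track_queries is not None:
            queries = torch.cat([track_queries, queries], dim=1)
        refined, boxes, logits = self.head(queries, img_feats)
        keep = logits.sigmoid().amax(-1) > score_thresh  # queries that survive
        return boxes, logits, refined, keep

Swapping in a different head then only requires matching the forward(queries, img_feats) signature; the tracking loop itself does not change.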

Please click the GIF below to watch our full demo, and reach out to Ziqi Pang if you are interested. Our method seamlessly handles occlusions and hand-overs between cameras.

Demo video

If you find our code or paper useful, please cite:

@inproceedings{pang2023standing,
  title={Standing Between Past and Future: Spatio-Temporal Modeling for Multi-Camera 3D Multi-Object Tracking},
  author={Pang, Ziqi and Li, Jie and Tokmakov, Pavel and Chen, Dian and Zagoruyko, Sergey and Wang, Yu-Xiong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

Getting Started

Please follow our documentation step by step. For the convenience of developers and researchers, we also include implementation notes that explain the design of PF-Track and speed up adapting our framework. If you find our documentation helpful, please recommend our work to your colleagues and friends.

  1. Pretrained models and data files.
  2. Environment Setup.
  3. Preprocessing nuScenes (a sanity-check snippet follows this list).
  4. Training.
  5. Inference.
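
As a quick sanity check after steps 2 and 3, the snippet below walks the keyframes of one nuScenes scene with the official nuscenes-devkit. The v1.0-mini split and the data path are placeholders for your own setup; this is an illustration, not part of the repository's tooling.

# Requires the official devkit: pip install nuscenes-devkit
from nuscenes.nuscenes import NuScenes

# Placeholder version and path; point these at your actual download.
nusc = NuScenes(version='v1.0-mini', dataroot='./data/nuscenes', verbose=True)

scene = nusc.scene[0]
sample_token = scene['first_sample_token']
while sample_token:
    sample = nusc.get('sample', sample_token)
    # Each keyframe holds tokens for every sensor, including the six
    # surround-view cameras that vision-centric tracking consumes.
    cameras = sorted(k for k in sample['data'] if k.startswith('CAM'))
    print(sample['timestamp'], cameras)
    sample_token = sample['next']  # empty string at the end of the scene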

Guide for Developers and Researchers

It took us THREE MONTHS to implement the baseline, because designing an end-to-end tracking and prediction framework is challenging. We therefore wrote the following documents to help you understand our design choices, read the code, and adapt it to your own tasks and datasets.

  1. System Overview: An ABC Guide to End-to-end MOT. (Please skim it even if you know end-to-end MOT well, because we clarify several non-trivial implementation details; a sketch of the query life cycle follows this list.)
  2. Visualization tools.
  3. Integration with various detection heads.
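
To complement the system overview, here is a compressed, purely illustrative view of the query life cycle in end-to-end MOT; none of these names come from the codebase. The point is the control flow: identity lives in a persistent query, past reasoning refines it with the track's history, and future reasoning predicts motion so briefly occluded tracks can coast instead of being terminated, which is what suppresses ID-switches.

def run_sequence(frames, model, score_thresh=0.4):
    """Illustrative end-to-end MOT loop over per-frame tensors (batch
    dimension omitted): identity is carried by persistent queries, so no
    post-hoc detection-to-track matching is needed."""
    track_queries = None
    results = []
    for frame in frames:
        # Joint pass over old track queries and fresh detection queries.
        boxes, scores, queries = model(frame, track_queries)
        # Past reasoning would refine `queries` with per-track history here;
        # future reasoning would predict short-term motion for occluded tracks.
        alive = scores > score_thresh
        track_queries = queries[alive]  # propagate identities to the next frame
        results.append((boxes[alive], scores[alive]))
    return results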

Acknowledgements

We thank the contributors to the following open-source projects. Our project would not have been possible without inspiration from these excellent researchers and engineers.

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

More Repositories

  1. packnet-sfm - TRI-ML Monocular Depth Estimation Repository (Python, 1,207 stars)
  2. vidar (Python, 549 stars)
  3. DDAD - Dense Depth for Autonomous Driving (DDAD) dataset (Python, 490 stars)
  4. dd3d - Official PyTorch implementation of DD3D: Is Pseudo-Lidar needed for Monocular 3D Object detection? (ICCV 2021), Dennis Park*, Rares Ambrus*, Vitor Guizilini, Jie Li, and Adrien Gaidon (Python, 460 stars)
  5. prismatic-vlms - A flexible and efficient codebase for training visually-conditioned language models (VLMs) (Python, 351 stars)
  6. KP3D - Code for "Self-Supervised 3D Keypoint Learning for Ego-motion Estimation" (Python, 239 stars)
  7. KP2D (Python, 176 stars)
  8. sdflabel - Official PyTorch implementation of CVPR 2020 oral "Autolabeling 3D Objects With Differentiable Rendering of SDF Shape Priors" (Python, 159 stars)
  9. realtime_panoptic - Official PyTorch implementation of CVPR 2020 Oral: Real-Time Panoptic Segmentation from Dense Detections (Python, 112 stars)
  10. permatrack - Implementation for Learning to Track with Object Permanence (Python, 111 stars)
  11. camviz - Visualization Library (Python, 99 stars)
  12. dgp - ML Dataset Governance Policy for Autonomous Vehicle Datasets (Python, 93 stars)
  13. VEDet (Python, 37 stars)
  14. RAP - Official code for the paper "RAP: Risk-Aware Prediction for Robust Planning": https://arxiv.org/abs/2210.01368 (Python, 31 stars)
  15. VOST - Code for the VOST dataset (Python, 20 stars)
  16. RAM - Implementation for Object Permanence Emerges in a Random Walk along Memory (Python, 18 stars)
  17. road - ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes (CoRL 2022) (Python, 11 stars)
  18. efm_datasets - TRI-ML Embodied Foundation Datasets (Python, 7 stars)
  19. refine - Official PyTorch implementation of the SIGGRAPH 2024 paper "ReFiNe: Recursive Field Networks for Cross-Modal Multi-Scene Representation" (Python, 5 stars)
  20. HAICU (4 stars)
  21. binomial_cis - Computation of binomial confidence intervals that achieve exact coverage (Jupyter Notebook, 3 stars)
  22. stochastic_verification - Official repository for the paper "How Generalizable Is My Behavior Cloning Policy? A Statistical Approach to Trustworthy Performance Evaluation" (Python, 3 stars)
  23. vlm-evaluation - VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning (Python, 1 star)