ViP3D: End-to-end Visual Trajectory Prediction via 3D Agent Queries (CVPR 2023)

  • This is the official repository of the paper: ViP3D: End-to-end Visual Trajectory Prediction via 3D Agent Queries (CVPR 2023).

Installation

Use the following commands to prepare the Python environment.

1) Create conda environment

conda create -n vip3d python=3.6

Supported Python versions are 3.6, 3.7, and 3.8.
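
As a quick check that the new environment picked up a supported interpreter:

conda run -n vip3d python -V  # expect Python 3.6.x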

2) Install pytorch

conda activate vip3d
pip install torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
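
As a sanity check (not part of the original instructions), confirm that the CUDA build of PyTorch is active:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # expect: 1.10.0+cu111 True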

3) Install mmcv, mmdet

pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10/index.html
pip install mmdet==2.24.1
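
To verify that mmcv-full was built against a matching CUDA toolkit, you can use the standard MMCV sanity check:

python -c "from mmcv.ops import get_compiling_cuda_version, get_compiler_version; print(get_compiling_cuda_version(), get_compiler_version())"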

4) Install other packages

pip install -r requirements.txt

5) Install mmdet3d

cd ~
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
git checkout v0.17.1 # Other versions may not be compatible.
python setup.py install
pip install -r requirements/runtime.txt  # Install packages for mmdet3d
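
If the build succeeded, mmdet3d should import cleanly from outside the source tree (a quick check, assuming the default install location):

cd ~ && python -c "import mmdet3d; print(mmdet3d.__version__)"  # expect 0.17.1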

Quick start with Docker (Optional)

We also provide a Docker image of ViP3D with all required packages pre-installed. The image is built from the NVIDIA container image for PyTorch. Make sure you have installed Docker and the NVIDIA Container Toolkit.

docker pull gentlesmile/vip3d
docker run --name vip3d_container -it --gpus all --ipc=host gentlesmile/vip3d
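
The image does not contain the nuScenes data itself. To make the dataset visible inside the container, mount it with -v (the host path /path/to/nuscenes is a placeholder, and the in-container path assumes the repository lives at /workspace/ViP3D inside the image):

docker run --name vip3d_container -it --gpus all --ipc=host \
    -v /path/to/nuscenes:/workspace/ViP3D/data/nuscenes \
    gentlesmile/vip3d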

Prepare Dataset

1) Download the nuScenes full dataset (v1.0) and the map expansion here.

You only need to download the keyframe blobs and the radar blobs.

2) Structure

After downloading and extracting, the directory structure should be as follows:

ViP3D
├── mmdet3d/
├── plugin/
├── tools/
├── data/
│   ├── nuscenes/
│   │   ├── maps/
│   │   ├── samples/
│   │   ├── v1.0-trainval/
│   │   ├── lidarseg/
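
If the raw nuScenes data already lives elsewhere on disk, a symlink avoids copying it (a common convention for mmdetection3d-based repos; replace /path/to/nuscenes with your actual location):

mkdir -p data
ln -s /path/to/nuscenes data/nuscenes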

3) Prepare data infos

Suppose the nuScenes data is saved at data/nuscenes/.

python tools/data_converter/nusc_tracking.py

Training and Evaluation

Training

Train ViP3D with 3 historical frames and the ResNet50 backbone. Training loads a pre-trained detector for weight initialization; suppose the detector checkpoint is at ckpts/detr3d_resnet50.pth. It can be downloaded from here.

bash tools/dist_train.sh plugin/vip3d/configs/vip3d_resnet50_3frame.py 8 --work-dir=work_dirs/vip3d_resnet50_3frame.1

The training stage requires ~17 GB of GPU memory and takes ~3 days for 24 epochs on 8× 3090 GPUs.
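
For quick debugging on a single GPU, the non-distributed entry point should also work (a sketch assuming this repo keeps the stock MMDetection tools/train.py that dist_train.sh wraps):

PYTHONPATH=. python tools/train.py plugin/vip3d/configs/vip3d_resnet50_3frame.py --work-dir=work_dirs/vip3d_resnet50_3frame.1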

Evaluation

Run evaluation using the following command:

PYTHONPATH=. python tools/test.py plugin/vip3d/configs/vip3d_resnet50_3frame.py work_dirs/vip3d_resnet50_3frame.1/epoch_24.pth --eval bbox
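
Multi-GPU evaluation should also be possible via the companion script (assuming the stock MMDetection tools/dist_test.sh is present; here with 8 GPUs):

bash tools/dist_test.sh plugin/vip3d/configs/vip3d_resnet50_3frame.py work_dirs/vip3d_resnet50_3frame.1/epoch_24.pth 8 --eval bbox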

The checkpoint epoch_24.pth can be downloaded from here.

Expected AMOTA with the ResNet50 backbone: 0.291

Then evaluate the prediction metrics:

unzip ./nuscenes_prediction_infos_val.zip
python tools/prediction_eval.py --result_path 'work_dirs/vip3d_resnet50_3frame.1/results_nusc.json'

Expected results: minADE 1.47, minFDE 2.21, MR (miss rate) 0.237, EPA (end-to-end prediction accuracy) 0.245

License

The code and assets are under the Apache 2.0 license.

Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{vip3d,
  title={ViP3D: End-to-end visual trajectory prediction via 3d agent queries},
  author={Gu, Junru and Hu, Chenxu and Zhang, Tianyuan and Chen, Xuanyao and Wang, Yilun and Wang, Yue and Zhao, Hang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5496--5506},
  year={2023}
}