TAO-Amodal
Official Repository of Tracking Any Object Amodally.
Project Page | Paper Link | Citations
Leave a ⭐ to keep track of our updates.
Table of Contents
Get Started
Clone the repository
git clone https://github.com/WesleyHsieh0806/TAO-Amodal.git
Setup environment
conda create --name TAO-Amodal python=3.9 -y
conda activate TAO-Amodal
bash environment_setup.sh
Prepare Dataset
- Download our dataset following the instructions here.
- The directory should have the following structure:
TAO-Amodal
├── frames
│   └── train
│       ├── ArgoVerse
│       ├── BDD
│       ├── Charades
│       ├── HACS
│       ├── LaSOT
│       └── YFCC100M
├── amodal_annotations
│   ├── train/validation/test.json
│   ├── train_lvis_v1.json
│   └── validation_lvis_v1.json
├── example_output
│   └── prediction.json
├── BURST_annotations
│   ├── train
│   │   └── train_visibility.json
│   ...
Explore more examples from our dataset here.
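If you want to poke at the annotation files directly, the minimal sketch below loads one of them with plain json. The file path is just an example taken from the layout above, and the "annotations"/"categories" keys are an assumption that the files follow a COCO/LVIS-style layout; the script only summarizes whatever top-level keys are actually present.

import json
from collections import Counter

# Example path following the directory layout above (adjust to your setup).
annotation_path = "TAO-Amodal/amodal_annotations/validation_lvis_v1.json"

with open(annotation_path, "r") as f:
    data = json.load(f)

# Summarize the top-level fields actually present in the file.
for key, value in data.items():
    size = len(value) if isinstance(value, (list, dict)) else value
    print(f"{key}: {size}")

# If the file follows a COCO/LVIS-style layout, show the most annotated categories.
if "annotations" in data and "categories" in data:
    id_to_name = {c["id"]: c["name"] for c in data["categories"]}
    counts = Counter(ann["category_id"] for ann in data["annotations"])
    for cat_id, n in counts.most_common(5):
        print(id_to_name.get(cat_id, cat_id), n)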
Visualization
Visualize our dataset and tracker predictions to get a better understanding of amodal tracking. Instructions can be found here.
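The repository's own visualization scripts are linked above; the snippet below is only a minimal, self-contained sketch of what drawing an amodal box looks like. The frame path and box values are placeholders, and the canvas is padded because amodal boxes can extend beyond the image border.

from PIL import Image, ImageDraw

# Placeholder inputs: replace with a real frame and a [x, y, width, height] box.
frame_path = "TAO-Amodal/frames/train/ArgoVerse/video_0/frame_000001.jpg"
amodal_box = [-30.0, 120.0, 260.0, 180.0]  # x < 0: the box extends past the left edge

frame = Image.open(frame_path).convert("RGB")

# Pad the canvas so boxes that reach outside the frame remain visible.
pad = 100
canvas = Image.new("RGB", (frame.width + 2 * pad, frame.height + 2 * pad), "black")
canvas.paste(frame, (pad, pad))

x, y, w, h = amodal_box
draw = ImageDraw.Draw(canvas)
draw.rectangle([x + pad, y + pad, x + w + pad, y + h + pad], outline="red", width=3)
canvas.save("amodal_box_example.jpg")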
Training and Inference
We provide the training and inference code of the proposed Amodal Expander.
The inference code generates a lvis_instances_results.json file, which can be used to obtain the evaluation results described in the next section.
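Before running evaluation, it can be worth sanity-checking the generated file. The sketch below is not part of the repository; it only assumes that lvis_instances_results.json is a list of per-detection dictionaries with the fields listed in the Evaluation section.

import json

# Fields the evaluation script expects in each prediction (see the Evaluation section).
REQUIRED_KEYS = {"image_id", "category_id", "bbox", "score", "track_id", "video_id"}

with open("lvis_instances_results.json", "r") as f:
    predictions = json.load(f)

print(f"{len(predictions)} predictions loaded")
missing = [i for i, p in enumerate(predictions) if not REQUIRED_KEYS.issubset(p)]
if missing:
    print(f"{len(missing)} predictions are missing required fields, e.g. index {missing[0]}")
else:
    print("All predictions contain the required fields.")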
Evaluation
- Output tracker predictions as JSON. The predictions should be structured as:
[{
"image_id" : int,
"category_id" : int,
"bbox" : [x,y,width,height],
"score" : float,
"track_id": int,
"video_id": int
}]
We also provide an example prediction JSON here; refer to it to check the correct format. A minimal sketch of writing a file in this format is included at the end of this section.
- Evaluate on TAO-Amodal
cd tools
python eval_on_tao_amodal.py --track_result /path/to/prediction.json \
--output_log /path/to/output.log \
--annotation /path/to/validation_lvis_v1.json
The annotation JSON is provided in our dataset. Evaluation results will be printed to your console and saved to the file given by --output_log.
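As a reference, here is a minimal sketch of dumping tracker output in the format expected by eval_on_tao_amodal.py. All numbers are placeholders; real image_id, category_id, and video_id values must come from the annotation JSON.

import json

# One placeholder prediction in the required format.
predictions = [
    {
        "image_id": 12345,                    # id of the frame in the annotation JSON
        "category_id": 805,                   # LVIS-style category id
        "bbox": [100.0, 200.0, 50.0, 80.0],   # [x, y, width, height] in pixels
        "score": 0.87,                        # detection confidence
        "track_id": 7,                        # identity of the tracked object
        "video_id": 42,                       # id of the video in the annotation JSON
    },
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)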
Citations
@misc{hsieh2023tracking,
title={Tracking Any Object Amodally},
author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
year={2023},
eprint={2312.12433},
archivePrefix={arXiv},
primaryClass={cs.CV}
}