
[CVPR 2022 Oral] ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis




CVPR, 2022
Lixin Yang* · Kailin Li* · Xinyu Zhan · Jun Lv · Wenqiang Xu · Jiefeng Li · Cewu Lu
* = equal contribution

Paper PDF · ArXiv PDF · YouTube Video


This repo contains the models and the training and testing code.

TODO

  • installation guideline
  • testing code and pretrained models
  • generating CCV-space
  • training pipeline

Installation


Follow the Installation Instructions to set up the environment, assets, and datasets.

Evaluation

HO3Dv2, Heatmap-based model, ArtiBoost

Download the pretrained checkpoint (artiboost_ho3dv2_clasbased_100e.pth.tar) to ./checkpoints.
Then run:

$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
  --gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100

This script yields the (Our Clas + Arti) result in Table 2 of the main paper.

  • The object's MPCPE score is stored in exp/submit_{cfg}_{time}/evaluations/.
  • The HO3Dv2 Codalab submission file will be dumped at: ./exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip.
    Upload it to the HO3Dv2 Codalab server and wait for the evaluation to finish.
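
Before uploading, you can programmatically locate the newest submission zip. A minimal sketch, assuming the exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip layout described above; the helper name is hypothetical:

```python
# Sketch: locate the newest Codalab submission zip produced by submit_reload.py.
# Assumed path layout (from this README): exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip
from pathlib import Path


def latest_submission_zip(exp_root="exp"):
    """Return the most recently modified *_SUBMIT.zip under exp/submit_*, or None."""
    zips = sorted(
        Path(exp_root).glob("submit_*/*_SUBMIT.zip"),
        key=lambda p: p.stat().st_mtime,
    )
    return zips[-1] if zips else None


if __name__ == "__main__":
    z = latest_submission_zip()
    print(z if z else "no submission zip found yet")
```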

You can also visualize the predictions, as in the image below:

[qualitative results figure]

First, install the extra packages required for rendering. Use pip to install them sequentially:

$ pip install vtk==9.0.1
$ pip install PyQt5==5.15.4
$ pip install PyQt5-Qt5==5.15.2
$ pip install PyQt5-sip==12.8.1
$ pip install mayavi==4.7.2
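
After installing, you can check that the rendering stack imports cleanly. A minimal sketch; the package names follow the pip list above, and the helper name is hypothetical:

```python
# Sketch: report which of the optional rendering packages fail to import.
# Package names are taken from the pip list above; versions are not checked.
import importlib


def missing_rendering_packages(names=("vtk", "PyQt5", "mayavi")):
    """Return the subset of `names` that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing
```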

Second, you need a connected display (a physical monitor, TeamViewer, or a VNC server) that supports the Qt platform plugin "xcb".
Inside that display session, start a new terminal and append --postprocess_fit_mesh and --postprocess_draw to the end of the shell command, e.g.:

# HO3Dv2, Heatmap-based model, ArtiBoost
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
  --gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100 \
  --postprocess_fit_mesh --postprocess_draw

The rendered qualitative results are stored at exp/submit_{cfg}_{time}/rendered_image/.
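
If --postprocess_draw aborts with a Qt "xcb" error, the usual cause is that no display is reachable. A minimal pre-flight check, assuming an X11 setup where the DISPLAY environment variable must be set; the helper name is hypothetical:

```python
# Sketch: heuristic display check before enabling --postprocess_draw.
# Assumption: the Qt "xcb" plugin needs an X display, i.e. DISPLAY must be set.
import os


def display_reachable(env=os.environ):
    """Return True if an X display appears to be configured."""
    return bool(env.get("DISPLAY"))
```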

HO3Dv2, Regression-based model, ArtiBoost

Download the pretrained checkpoint (artiboost_ho3dv2_regbased_100e.pth.tar) to ./checkpoints, then run:

$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_regbased_artiboost.yaml \
  --gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11

This script yields the (Our Reg + Arti) result in Table 2 of the main paper.

HO3Dv3, Heatmap-based model, ArtiBoost

Download the pretrained checkpoint (artiboost_ho3dv3_clasbased_200e.pth.tar) to ./checkpoints, then run:

$ python train/submit_reload.py --cfg config_eval/eval_ho3dv3_clasbased_artiboost.yaml \
  --gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11

This script yields the (Our Clas + Arti) result in Table 5 of the main paper.
Upload the HO3Dv3 Codalab submission file to the HO3Dv3 Codalab server and wait for the evaluation to finish.

HO3Dv3, Heatmap-based, Object symmetry model, ArtiBoost

Download the pretrained checkpoint (artiboost_ho3dv3_clasbased_sym_200e.pth.tar) to ./checkpoints, then run:

$ python train/submit_reload.py --cfg config_eval/eval_ho3dv3_clasbased_sym_artiboost.yaml \
  --gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11

This script yields the (Ours Clas sym + Arti) result in Table 5 of the main paper.

DexYCB, Heatmap-based, Object symmetry model, ArtiBoost

Download the pretrained checkpoint (artiboost_dexycb_clasbased_sym_100e.pth.tar) to ./checkpoints, then run:

$ python train/submit_reload.py --cfg config_eval/eval_dexycb_clasbased_sym_artiboost.yaml --gpu_id 0

This script yields the (Ours Clas sym + Arti) result in Table 4 of the main paper.

Generate CCV

Training Pipeline

Acknowledgment & Citation

@inproceedings{yang2021ArtiBoost,
    title={{ArtiBoost}: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis},
    author={Yang, Lixin and Li, Kailin and Zhan, Xinyu and Lv, Jun and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
    booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
}