Code for our SIGGRAPH Asia 2023 paper "Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture". This repository contains the system implementation and evaluation. See the Project Page for more details.
```bash
conda create -n RobustCap python=3.8
conda activate RobustCap
pip install -r requirements.txt
```
Install the CUDA version of PyTorch from the official website.
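For example, a command of the following form installs a CUDA build (the CUDA version `cu118` here is only illustrative; copy the exact command for your setup from the PyTorch website): `pip install torch --index-url https://download.pytorch.org/whl/cu118`.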
- Download the SMPL files from here or the official website. Unzip them and place them at `models/`.
- Download the pretrained model and data and place them at `data/`.
- For AIST++ evaluation, download the no-aligned files and place them at `data/dataset_work/AIST`.
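After these steps, the layout should look roughly like this (a sketch assembled from the paths above; the exact file names inside each folder may differ):

```
models/                # SMPL model files
data/                  # pretrained model and data
└── dataset_work/
    └── AIST/          # no-aligned files for AIST++ evaluation
```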
We provide evaluation code for AIST++, TotalCapture, 3DPW, and 3DPW-OCC. The results may differ slightly from the numbers reported in the paper due to randomness in the optimization.
```bash
python evaluate.py
```
We provide visualization code for AIST++. You can use the `view_aist` function in `evaluate.py` to visualize the results. By specifying `seq_idx` and `cam_idx`, you can visualize the results of a specific sequence and camera. Set `vis=True` to visualize the overlay results (you need to download the original AIST++ videos and put them at the path given by `config.paths.aist_raw_dir`). Use `body_model.view_motion` to visualize the results with Open3D.
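A minimal usage sketch (the function name and parameters are taken from the description above; the index values are arbitrary and the exact signature should be checked in `evaluate.py`):

```python
# Hypothetical example: visualize sequence 0 from camera 1 with the
# overlay rendering enabled (requires the original AIST++ videos).
from evaluate import view_aist

view_aist(seq_idx=0, cam_idx=1, vis=True)
```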
To visualize with Unity instead, use the `view_aist_unity` function in `evaluate.py`. As with `view_aist`, specify `seq_idx` and `cam_idx` to select a sequence and camera, then follow the steps below (a usage sketch comes after the list):
- Download the Unity assets from here.
- Create a Unity 3D project, import the downloaded assets, and create a directory `UserData/Motion`.
- In the Unity scripts, use `Set Motion` (set `Fps` to 60) and do not use `Record Video`.
- Run `view_aist_unity` and copy the generated files to `UserData/Motion`.

Then you can run the Unity scripts to visualize the results.
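As above, a minimal sketch of the export call (the function and parameters come from the description above; the indices are arbitrary and the signature should be checked in `evaluate.py`):

```python
# Hypothetical example: export sequence 0 / camera 1 for Unity playback.
# Copy the generated files into the Unity project's UserData/Motion afterwards.
from evaluate import view_aist_unity

view_aist_unity(seq_idx=0, cam_idx=1)
```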
To do:
- Live demo code.
If you find this project helpful, please cite our paper:

```bibtex
@inproceedings{pan2023fusing,
  title={Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture},
  author={Pan, Shaohua and Ma, Qi and Yi, Xinyu and Hu, Weifeng and Wang, Xiong and Zhou, Xingkang and Li, Jijunnan and Xu, Feng},
  booktitle={SIGGRAPH Asia 2023 Conference Papers},
  pages={1--11},
  year={2023}
}
```