
ICCV 2023 HumanMAC

Code for "HumanMAC: Masked Motion Completion for Human Motion Prediction".

Ling-Hao Chen*1, Jiawei Zhang*2, Yewen Li3, Yiren Pang2, Xiaobo Xia4, Tongliang Liu4

1Tsinghua University, 2Xidian University, 3Nanyang Technological University, 4The University of Sydney

[Project Page] | [Preprint] | [Chinese Documentation] | [video] | [code]

Human motion prediction is a classical problem in computer vision and computer graphics with a wide range of practical applications. Previous efforts achieve great empirical performance based on an encoding-decoding style, which first encodes previous motions into latent representations and then decodes the latent representations into predicted motions. However, in practice, such methods are still unsatisfactory due to several issues, including complicated loss constraints, cumbersome training processes, and a limited ability to switch between different categories of motions in prediction. In this paper, to address these issues, we jump out of the foregoing style and propose a novel framework from a new perspective. Specifically, our framework works in a denoising diffusion style. In the training stage, we learn a motion diffusion model that generates motions from random noise. In the inference stage, with a denoising procedure, we make motion predictions conditioned on observed motions to output more continuous and controllable predictions. The proposed framework enjoys promising algorithmic properties: it needs only one loss in optimization and is trained in an end-to-end manner. Additionally, it accomplishes the switch between different categories of motions effectively, which is significant in realistic tasks, e.g., animation. Comprehensive experiments on benchmarks confirm the superiority of the proposed framework. The project page is available at https://lhchen.top/Human-MAC.

๐Ÿ“ข News

[2023/12/19]: HumanMAC works as a motion prediction module in Interactive Humanoid.

[2023/10/21]: Check out my latest work HumanTOMATO, the FIRST attempt to generate whole-body motions with text description.

[2023/10/17]: Check out my latest open-source project UniMoCap, a unifier for mocap-based text-motion datasets.

[2023/07/14]: HumanMAC is accepted by ICCV 2023!

[2023/03/26]: HumanMAC code released!

๐Ÿ—‚๏ธ Preparation

Data

Datasets for Human3.6M and HumanEva-I:

We adopt the data preprocessing from GSPS: refer to here and download all files into the ./data directory.

Dataset for zero-shot experiments on AMASS:

We retarget skeletons in the AMASS dataset to the Human3.6M skeleton and provide a small retargeted subset of AMASS motions here. The retargeted sub-dataset can be downloaded from Google Drive (Baidu Netdisk); put it in the ./data directory. The retargeting process is detailed in ./motion-retargeting.

The final ./data directory structure is shown below:

data
โ”œโ”€โ”€ amass_retargeted.npy
โ”œโ”€โ”€ data_3d_h36m.npz
โ”œโ”€โ”€ data_3d_h36m_test.npz
โ”œโ”€โ”€ data_3d_humaneva15.npz
โ”œโ”€โ”€ data_3d_humaneva15_test.npz
โ”œโ”€โ”€ data_multi_modal
โ”‚   โ”œโ”€โ”€ data_candi_t_his25_t_pred100_skiprate20.npz
โ”‚   โ””โ”€โ”€ t_his25_1_thre0.500_t_pred100_thre0.100_filtered_dlow.npz
โ””โ”€โ”€ humaneva_multi_modal
    โ”œโ”€โ”€ data_candi_t_his15_t_pred60_skiprate15.npz
    โ””โ”€โ”€ t_his15_1_thre0.500_t_pred60_thre0.010_index_filterd.npz
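To sanity-check the layout above before training, a small helper can report which expected files are still missing. This is only a sketch, not code from the repo; the names REQUIRED and missing_files are hypothetical:

```python
import os

# Expected contents of ./data, matching the tree above
# (filenames copied verbatim, including 'index_filterd').
REQUIRED = [
    "amass_retargeted.npy",
    "data_3d_h36m.npz",
    "data_3d_h36m_test.npz",
    "data_3d_humaneva15.npz",
    "data_3d_humaneva15_test.npz",
    "data_multi_modal/data_candi_t_his25_t_pred100_skiprate20.npz",
    "data_multi_modal/t_his25_1_thre0.500_t_pred100_thre0.100_filtered_dlow.npz",
    "humaneva_multi_modal/data_candi_t_his15_t_pred60_skiprate15.npz",
    "humaneva_multi_modal/t_his15_1_thre0.500_t_pred60_thre0.010_index_filterd.npz",
]

def missing_files(data_dir="./data"):
    """Return the expected data files that are absent from data_dir."""
    return [f for f in REQUIRED if not os.path.isfile(os.path.join(data_dir, f))]
```

Running missing_files() after downloading should return an empty list; anything it returns still needs to be placed under ./data.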

Pretrained Model

To make it convenient to visualize HumanMAC's various abilities, we provide a pretrained model on Human3.6M via Google Drive (Baidu Netdisk). The pretrained model needs to be put in the ./checkpoints directory.

Environment Setup

sh install.sh

๐Ÿ”ง Training

For Human3.6M:

python main.py --cfg h36m --mode train

For HumanEva-I:

python main.py --cfg humaneva --mode train

After running the command, a directory named <DATASET>_<INDEX> is created in the ./results directory (<DATASET> is one of {'h36m', 'humaneva'}; <INDEX> equals the number of directories already in ./results). During training, GIFs are stored in ./results/<DATASET>_<INDEX>/out, log files in ./results/<DATASET>_<INDEX>/log, model checkpoints in ./results/<DATASET>_<INDEX>/models, and metrics in ./results/<DATASET>_<INDEX>/results.
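The naming scheme can be illustrated with a short sketch. The helper next_run_dir is hypothetical (not code from the repo), assuming the index is simply the count of directories already present:

```python
import os

def next_run_dir(dataset, results_root="./results"):
    """Build the next run directory name <DATASET>_<INDEX>, where
    <INDEX> is the number of directories already in results_root."""
    os.makedirs(results_root, exist_ok=True)
    index = sum(
        os.path.isdir(os.path.join(results_root, name))
        for name in os.listdir(results_root)
    )
    return os.path.join(results_root, f"{dataset}_{index}")
```

For example, the first Human3.6M run would land in ./results/h36m_0, and a subsequent HumanEva-I run in ./results/humaneva_1.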

๐Ÿ“ฝ Visualization of Motion Prediction

For Human3.6M:

python main.py --cfg h36m --mode pred --vis_row 3 --vis_col 10 --ckpt ./checkpoints/h36m_ckpt.pt

For HumanEva-I:

python main.py --cfg humaneva --mode pred --vis_row 3 --vis_col 10 --ckpt ./checkpoints/humaneva_ckpt.pt

vis_row and vis_col represent the number of rows and columns of the drawn GIFs, respectively. There are two GIFs for each category of motions in the <DATASET>; each GIF contains vis_row motions, and each motion has vis_col candidate predictions. These GIFs can be found in ./inference/<DATASET>_<INDEX>/out.

๐Ÿ”€ Motion Switch

Visualization of switch ability:

python main.py --mode switch --ckpt ./checkpoints/h36m_ckpt.pt

The vis_switch_num GIFs will be stored in ./inference/switch_<INDEX>/out. Each GIF contains 30 motions, which eventually switch to one of them.

๐Ÿ•น๏ธ Controllable Motion Prediction

Visualization of controllable motion prediction:

python main.py --mode control --ckpt ./checkpoints/h36m_ckpt.pt

7 GIFs will be stored in ./inference/<CONTROL>_<INDEX>/out; each GIF has vis_row motions, and each motion has vis_col candidate predictions. <CONTROL> corresponds to one of {'right_leg', 'left_leg', 'torso', 'left_arm', 'right_arm', 'fix_lower', 'fix_upper'}.

๐ŸŽฏ Zero-shot Prediction on AMASS

Visualization of zero-shot on the AMASS dataset:

python main.py --mode zero_shot --ckpt ./checkpoints/h36m_ckpt.pt

The GIFs of the zero-shot experiment will be stored in ./inference/zero_shot_<INDEX>/out, with the number of motions set by vis_row and vis_col.

๐Ÿง Evaluation

Evaluate on Human3.6M:

python main.py --cfg h36m --mode eval --ckpt ./checkpoints/h36m_ckpt.pt

Evaluate on HumanEva-I:

python main.py --cfg humaneva --mode eval --ckpt ./checkpoints/humaneva_ckpt.pt

Note: We parallelize the evaluation of the metrics (APD, ADE, FDE, MMADE, and MMFDE) to speed it up, so this part strictly requires a GPU.
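Since evaluation fails without CUDA, it can help to check for a GPU before launching a long run. A minimal pre-flight sketch (gpu_visible is a hypothetical helper, not part of the repo; it only checks that the NVIDIA driver tool is on PATH, not that PyTorch can actually use it):

```python
import shutil

def gpu_visible():
    """Rough pre-flight check: True if nvidia-smi is on PATH,
    suggesting an NVIDIA GPU driver is installed."""
    return shutil.which("nvidia-smi") is not None

if __name__ == "__main__":
    if not gpu_visible():
        raise SystemExit("Evaluation requires a GPU (nvidia-smi not found).")
```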

๐ŸŒน Acknowledgments

We would like to thank Mr. Yu-Kun Zhou from Xidian University, and Mr. Wenhao Yang from Nanjing University for providing significant suggestions and technical support.

Part of the code is borrowed from the DLow and GSPS repositories.

๐Ÿ“š License

This code is distributed under an MIT LICENSE. Note that our code depends on other libraries and datasets which each have their own respective licenses that must also be followed.

๐Ÿค Citation

Please consider citing our paper if you find it helpful in your research:

@inproceedings{chen2023humanmac,
	title={HumanMAC: Masked Motion Completion for Human Motion Prediction},
	author={Chen, Ling-Hao and Zhang, Jiawei and Li, Yewen and Pang, Yiren and Xia, Xiaobo and Liu, Tongliang},
	booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
	year={2023}
}

๐ŸŒŸ Star History

Star History Chart

Contact at: thu DOT lhchen AT gmail DOT com
