# Dancing to Music
PyTorch implementation of the cross-modality generative model that synthesizes dance from music.
## Paper
Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, Jan Kautz
*Dancing to Music*
Neural Information Processing Systems (NeurIPS), 2019
[Paper] [YouTube] [Project] [Blog] [Supp]
## Example Videos
- **Beat-Matching**
  1st row: generated dance sequences; 2nd row: music beats; 3rd row: kinematic beats (see the beat-extraction sketch after this list).
- **Multimodality**
  Generate diverse dance sequences from the same music and the same initial pose.
- **Long-Term Generation**
  Seamlessly generate dance sequences of arbitrary length.
- **Photo-Realistic Videos**
  Map generated dance sequences to photo-realistic videos.
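For a concrete sense of what beat-matching compares, below is a minimal sketch (not code from this repo) of extracting the two beat tracks: music beats via `librosa`'s beat tracker, and kinematic beats as local minima of motion magnitude. The pose array shape `(T, J, 2)` and the 15 fps frame rate are illustrative assumptions.

```python
import librosa
import numpy as np

def music_beats(wav_path):
    """Music beat times (seconds) via librosa's beat tracker."""
    y, sr = librosa.load(wav_path)
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    return librosa.frames_to_time(beat_frames, sr=sr)

def kinematic_beats(poses, fps=15.0):
    """Kinematic beat times (seconds) from a (T, J, 2) array of 2D joints.

    Motion magnitude is the mean joint displacement between consecutive
    frames; a kinematic beat is taken to be a local minimum of that
    magnitude, i.e. a momentary pause in the movement.
    """
    vel = np.linalg.norm(np.diff(poses, axis=0), axis=-1).mean(axis=-1)
    is_min = (vel[1:-1] < vel[:-2]) & (vel[1:-1] < vel[2:])
    return (np.flatnonzero(is_min) + 1) / fps
```

With both beat sequences in hand, alignment can be scored as, for example, the fraction of kinematic beats falling within a small time window of a music beat.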
## Train Decomposition
```bash
python train_decomp.py --name Decomp
```
## Train Composition
```bash
python train_comp.py --name Decomp --decomp_snapshot DECOMP_SNAPSHOT
```
## Demo
```bash
python demo.py --decomp_snapshot DECOMP_SNAPSHOT --comp_snapshot COMP_SNAPSHOT --aud_path AUD_PATH --out_file OUT_FILE --out_dir OUT_DIR --thr THR
```
### Flags
- `aud_path`: input `.wav` file
- `out_file`: path of the output `.mp4` file
- `out_dir`: directory for the output frames
- `thr`: threshold based on motion magnitude
- `modulate`: whether to apply beat warping
### Example
```bash
python demo.py --decomp_snapshot snapshot/Stage1.ckpt --comp_snapshot snapshot/Stage2.ckpt --aud_path demo/demo.wav --out_file demo/out.mp4 --out_dir demo/out_frame
```
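To run the demo over a whole folder of songs, a small driver like the following sketch works. It uses only the flags documented above; the checkpoint names and directory layout are placeholders to adjust for your setup.

```python
import subprocess
from pathlib import Path

# Batch the demo over every .wav in a folder (paths are placeholders).
for wav in sorted(Path("demo/songs").glob("*.wav")):
    out = Path("demo/results") / wav.stem
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "python", "demo.py",
            "--decomp_snapshot", "snapshot/Stage1.ckpt",
            "--comp_snapshot", "snapshot/Stage2.ckpt",
            "--aud_path", str(wav),
            "--out_file", str(out / "out.mp4"),
            "--out_dir", str(out / "frames"),
        ],
        check=True,  # raise if demo.py exits with an error
    )
```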
## Citation
If you find this code useful for your research, please cite our paper:
```bibtex
@inproceedings{lee2019dancing2music,
  title={Dancing to Music},
  author={Lee, Hsin-Ying and Yang, Xiaodong and Liu, Ming-Yu and Wang, Ting-Chun and Lu, Yu-Ding and Yang, Ming-Hsuan and Kautz, Jan},
  booktitle={NeurIPS},
  year={2019}
}
```
## License
Copyright (C) 2020 NVIDIA Corporation. All rights reserved. This work is made available under the NVIDIA Source Code License (1-Way Commercial). To view a copy of this license, visit https://nvlabs.github.io/Dancing2Music/LICENSE.txt.