TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting

Project Page | YouTube | Paper

This is the official PyTorch implementation of the CVPR 2020 paper "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting".

Environment

conda install pytorch torchvision cudatoolkit=<your cuda version>
conda install pyyaml scikit-image scikit-learn opencv
pip install -r requirements.txt
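
As an optional sanity check (not part of the original instructions), you can verify that PyTorch was installed with working CUDA support:

import torch

# Print the installed PyTorch version and whether a CUDA-capable GPU is visible.
print(torch.__version__)
print(torch.cuda.is_available())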

Data

Mixamo

Mixamo is a synthetic 3D character animation dataset.

  1. Download mixamo data here.
  2. Extract under data/mixamo

For directions on downloading the original 3D Mixamo data, please refer to this link.

SoloDance

SoloDance is a collection of dancing videos from YouTube. We use DensePose to extract skeleton sequences from these videos for training.

  1. Download the extracted skeleton sequences here.
  2. Extract under data/solo_dance

The original videos can be downloaded here.

Preprocessing

Run sh scripts/preprocess.sh to preprocess the two datasets above.

Pretrained model

Download the pretrained models here.

Inference

  1. For Skeleton Extraction, please consider using a pose estimation library such as Detectron2. We require the input skeleton sequences to be in the format of a numpy .npy file (a minimal construction sketch follows this list):

    • The file should contain an array with shape 15 x 2 x length.
    • The first dimension (15) corresponds to the 15 body joints defined here.
    • The second dimension (2) corresponds to x and y coordinates.
    • The third dimension (length) is the temporal dimension.
  2. For the Motion Retargeting Network, we provide a sample command for inference:

# replace the checkpoint, source, and target paths with your actual paths
python infer_pair.py \
  --config configs/transmomo.yaml \
  --checkpoint transmomo_mixamo_36_800_24/checkpoints/autoencoder_00200000.pt \
  --source a.npy \
  --target b.npy \
  --source_width 1280 --source_height 720 \
  --target_height 1920 --target_width 1080
  3. For Skeleton-to-Video Rendering, please refer to Everybody Dance Now.
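
As a rough illustration of the .npy input format described in step 1, the sketch below builds a skeleton sequence of the required shape and saves it to disk. The keypoints_per_frame list and its random coordinates are hypothetical placeholders; only the 15 x 2 x length layout comes from the requirements above.

import numpy as np

# Hypothetical input: one (15, 2) array of (x, y) joint coordinates per frame,
# produced by any pose estimator, here filled with random values for 64 frames.
keypoints_per_frame = [np.random.rand(15, 2) * [1280, 720] for _ in range(64)]

# Stack frames along a new last axis to obtain shape (15, 2, length):
# joints x coordinates x time, as expected by infer_pair.py.
skeleton_sequence = np.stack(keypoints_per_frame, axis=-1)
assert skeleton_sequence.shape == (15, 2, 64)

np.save("a.npy", skeleton_sequence)  # pass this file as --source or --target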

Training

To train the Motion Retargeting Network, run

python train.py --config configs/transmomo.yaml

To train on the SoloDance dataset, run

python train.py --config configs/transmomo_solo_dance.yaml

Testing

To evaluate motion retargeting MSE, first generate the retargeted motions with

# replace the config with the one used for training, and the checkpoint and
# output directory with your actual paths
python test.py \
  --config configs/transmomo.yaml \
  --checkpoint transmomo_mixamo_36_800_24/checkpoints/autoencoder_00200000.pt \
  --out_dir transmomo_mixamo_36_800_24_results

Then compute the MSE with

# replace with the output directory from the previous step
python scripts/compute_mse.py \
  --in_dir transmomo_mixamo_36_800_24_results
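
Conceptually, the metric is a mean squared error between retargeted and ground-truth joint sequences. The sketch below is only an illustration under the 15 x 2 x length format described earlier; the file names are hypothetical and the actual metric is whatever scripts/compute_mse.py implements.

import numpy as np

# Hypothetical file names; both arrays are assumed to share shape (15, 2, length).
pred = np.load("retargeted.npy")
gt = np.load("ground_truth.npy")

# Mean squared error averaged over joints, coordinates, and frames.
mse = np.mean((pred - gt) ** 2)
print(f"motion retargeting MSE: {mse:.4f}")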

Project Structure

transmomo.pytorch
├── configs - configuration files
├── data - place for storing data
├── docs - documentation
├── lib
│   ├── data.py - datasets and data loaders
│   ├── networks - encoders, decoders, discriminators, etc.
│   ├── trainer.py - training pipeline
│   ├── loss.py - loss functions
│   ├── operation.py - operations, e.g. rotation, projection, etc.
│   └── util - utility functions
├── out - place for storing output
├── infer_pair.py - perform motion retargeting
├── render_interpolate.py - perform motion and body interpolation
├── scripts - scripts for data processing and experiments
├── test.py - test MSE
└── train.py - main entrance for training

TODOs

  • Detailed documentation

  • Add example files

  • Release in-the-wild dancing video dataset (unannotated)

  • Tool for visualizing Mixamo test error

  • Tool for converting keypoint formats

Citation

Z. Yang*, W. Zhu*, W. Wu*, C. Qian, Q. Zhou, B. Zhou, C. C. Loy. "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. (* indicates equal contribution.)

BibTeX:

@inproceedings{transmomo2020,
  title={TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting},
  author={Yang, Zhuoqian and Zhu, Wentao and Wu, Wayne and Qian, Chen and Zhou, Qiang and Zhou, Bolei and Loy, Chen Change},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

Acknowledgement

This repository is partly based on Rundi Wu's Learning Character-Agnostic Motion for Motion Retargeting in 2D and Xun Huang's MUNIT: Multimodal UNsupervised Image-to-image Translation. The skeleton-to-video rendering part is based on Everybody Dance Now. We sincerely thank them for their inspiration and contribution to the community.