Learning Character-Agnostic Motion for Motion Retargeting in 2D

We provide a PyTorch implementation for our paper Learning Character-Agnostic Motion for Motion Retargeting in 2D, SIGGRAPH 2019.

Prerequisites

  • Linux
  • CPU or NVIDIA GPU + CUDA CuDNN
  • Python 3
  • PyTorch 0.4
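
To confirm that your environment matches these requirements, you can check the installed Python and PyTorch versions:

    python --version
    python -c "import torch; print(torch.__version__)"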

Getting Started

Installation

  • Clone this repo

    git clone https://github.com/ChrisWu1997/2D-Motion-Retargeting.git
    cd 2D-Motion-Retargeting
    
  • Install dependencies

    pip install -r requirements.txt
    

    Note that the imageio package requires ffmpeg, and there are several ways to install it. For anaconda users, running conda install ffmpeg -c conda-forge is the simplest option.
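
    To verify that ffmpeg is available on your PATH afterwards, you can run:

    ffmpeg -version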

Run demo examples

We provide pretrained models and several video examples, along with their OpenPose outputs. After running, the results (final joint positions and videos) will be saved in the specified output folder.

  • Run the full model to combine the motion, skeleton, and view angle from three input videos:

    python predict.py -n full --model_path ./model/pretrained_full.pth -v1 ./examples/tall_man -v2 ./examples/small_man -v3 ./examples/workout_march -h1 720 -w1 720 -h2 720 -w2 720 -h3 720 -w3 720 -o ./outputs/full-demo --max_length 120
    

    Results will be saved in ./outputs/full-demo.

  • Run the full model to interpolate between two input videos. For example, to keep the body attribute unchanged and interpolate along the motion and view axes:

    python interpolate.py --model_path ./model/pretrained_full.pth -v1 ./examples/model -v2 ./examples/tall_man -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/interpolate-demo.mp4 --keep_attr body --form matrix --nr_sample 5 --max_length 120
    

    You will get a matrix of videos demonstrating the interpolation results.

  • Run the two-encoder model to transfer motion and skeleton between two input videos:

    python predict.py -n skeleton --model_path ./model/pretrained_skeleton.pth -v1 ./examples/tall_man -v2 ./examples/small_man -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/skeleton-demo --max_length 120
    
  • Run the two-encoder model to transfer motion and view angle between two input videos:

    python predict.py -n view --model_path ./model/pretrained_view.pth -v1 ./examples/tall_man -v2 ./examples/model -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/view-demo --max_length 120
    

Use your own videos

To run our models on your own videos, you first need to use OpenPose to extract the 2D joint positions from each video, then use the resulting JSON files as described in the demo examples above.
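
For reference, a typical OpenPose invocation that writes one keypoint JSON file per frame looks roughly like the following; the binary path, video file, and output directory are placeholders that depend on your own OpenPose build and data:

    ./build/examples/openpose/openpose.bin --video my_video.mp4 --write_json ./my_video_json/ --display 0 --render_pose 0

The --display 0 and --render_pose 0 flags simply skip on-screen rendering to speed up extraction.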

Train from scratch

Prepare Data

  • Download Mixamo Data

    For convenience, we provide a packed version of the Mixamo data that we use. To download it, see Google Drive or Baidu Drive (8jq3). After downloading, extract it into ./mixamo_data.

    NOTE: Our Mixamo dataset covers only a part of the full collection provided by the Mixamo website. If you want to collect Mixamo data yourself, you can follow our guide here. The downloaded files are in fbx format; to convert them into json/npy (3D joint positions), you can use our script dataset/fbx2joints3d.py (requires Blender 2.79), as sketched below.

  • Preprocess the downloaded data

    python ./dataset/preprocess.py
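
As referenced in the note above, a rough sketch of invoking the fbx conversion script through Blender's command line is shown below; -b runs Blender in background mode and -P executes the given Python script. Any arguments the script itself expects are not shown here, so check dataset/fbx2joints3d.py for its exact interface:

    blender -b -P ./dataset/fbx2joints3d.py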
    

Train

  • Train the full model (with three encoders) on GPU:

    python train.py -n full -g 0
    

    Furthermore, you can select which structure to train and which losses to use through command-line arguments:

    -n: which structure to train. 'skeleton' / 'view' selects the two-encoder system that transfers skeleton/view; 'full' selects the full system with three encoders.

    --disable_triplet: disable the triplet loss. By default, the triplet loss is used.

    --use_footvel_loss: use the foot velocity loss.
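
    For example, to train the two-encoder skeleton model on GPU 0 with the triplet loss disabled and the foot velocity loss enabled, combine the flags above:

    python train.py -n skeleton -g 0 --disable_triplet --use_footvel_loss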

Citation

If you use this code for your research, please cite our paper:

@article{aberman2019learning,
  author = {Aberman, Kfir and Wu, Rundi and Lischinski, Dani and Chen, Baoquan and Cohen-Or, Daniel},
  title = {Learning Character-Agnostic Motion for Motion Retargeting in 2D},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {38},
  number = {4},
  pages = {75},
  year = {2019},
  publisher = {ACM}
}