
Reference code for "Motion-supervised Co-Part Segmentation" paper

Motion Supervised co-part Segmentation

Arxiv | YouTube video

This repository contains the source code for the paper Motion Supervised co-part Segmentation by Aliaksandr Siarohin*, Subhankar Roy*, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci and Nicu Sebe.

* - denotes equal contribution

Our method is a self-supervised deep learning approach to co-part segmentation. Unlike previous works, it develops the idea that motion information inferred from videos can be leveraged to discover meaningful object parts. The method can also perform video editing (aka part-swaps).
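At its core, a part-swap is per-pixel compositing: a soft part mask selects which pixels come from the source and which from the target. The sketch below illustrates the idea in pure Python on toy data; all names are hypothetical, and the actual code operates on image tensors inside the network.

```python
# Hypothetical sketch of part-swap compositing:
# result = m * source + (1 - m) * target, with a soft mask m in [0, 1].

def part_swap(source, target, mask):
    """Blend source pixels into the target wherever the part mask is active.

    source, target: images as flat lists of pixel intensities (same length).
    mask: soft segmentation values in [0, 1], one per pixel.
    """
    return [m * s + (1.0 - m) * t for s, t, m in zip(source, target, mask)]

# Toy 4-pixel example: swap the first half of the "image".
source = [1.0, 1.0, 1.0, 1.0]
target = [0.0, 0.0, 0.0, 0.0]
mask = [1.0, 1.0, 0.0, 0.0]
print(part_swap(source, target, mask))  # [1.0, 1.0, 0.0, 0.0]
```

Because the mask is soft, the blend produces smooth transitions at part boundaries rather than hard seams.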

Example segmentations

Unsupervised segmentations obtained with our method on VoxCeleb:

and TaiChi dataset:

Example part-swaps

Part swapping with our method on the VoxCeleb dataset. Each triplet shows the source image, the target video (with the swap mask in the corner) and the result:

Hair Swap Beard Swap
Eyes Swap Lips Swap

Installation

We support Python 3. To install the dependencies, run:

pip install -r requirements.txt

YAML configs

There are several configuration files (config/dataset_name.yaml), one for each dataset. See config/taichi-sem-256.yaml for a description of each parameter.

Pre-trained checkpoints

Checkpoints can be found under the following links: yandex-disk and google-drive.

Part-swap demo

To run a demo, download a checkpoint and run the following command:

python part_swap.py  --config config/dataset_name.yaml --target_video path/to/target --source_image path/to/source --checkpoint path/to/checkpoint --swap_index 0,1

The result will be stored in result.mp4.

  • For swapping, either soft or hard labels can be used (specify --hard for hard segmentation).

  • For swapping, either the target or the source segmentation mask can be used (specify --use_source_segmentation to use the source segmentation mask).

  • For reference, we also provide a fully-supervised segmentation. For the fully-supervised variant, add the --supervised option and run git clone https://github.com/AliaksandrSiarohin/face-makeup.PyTorch face_parsing, which is a fork of @zllrunning.

  • Also for reference, we provide First Order Motion Model based alignment; use --first_order_motion_model and the corresponding checkpoint. This alignment can only be used along with the --supervised option.
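The --hard option above amounts to collapsing the soft per-part probabilities into a one-hot assignment. A rough sketch of that step (pure Python, hypothetical names; the real code works on network output tensors) takes the argmax over part channels at each pixel:

```python
def harden(soft_masks):
    """Turn per-part soft masks into hard one-hot masks via per-pixel argmax.

    soft_masks: list of K part masks, each a list of per-pixel probabilities.
    Returns K binary masks where every pixel belongs to exactly one part.
    """
    num_parts = len(soft_masks)
    num_pixels = len(soft_masks[0])
    hard = [[0] * num_pixels for _ in range(num_parts)]
    for p in range(num_pixels):
        # Assign the pixel to the part with the highest probability.
        winner = max(range(num_parts), key=lambda k: soft_masks[k][p])
        hard[winner][p] = 1
    return hard

# Two parts, three pixels: pixels 0 and 2 go to part 0, pixel 1 to part 1.
soft = [[0.9, 0.4, 0.6],
        [0.1, 0.6, 0.4]]
print(harden(soft))  # [[1, 0, 1], [0, 1, 0]]
```

Hard labels give crisp part boundaries at the cost of the smooth blending that soft masks provide.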

Colab Demo

We provide a demo for Google Colab, see: part_swap.ipynb.

Training

Model training consists of fine-tuning the First Order Model checkpoint (it can be downloaded from google-drive or yandex-disk). Use the following command for training:

CUDA_VISIBLE_DEVICES=0 python train.py --config config/dataset_name.yaml --device_ids 0 --checkpoint dataset-name.cpk.pth.tar

The code will create a folder in the log directory (each run creates a new time-stamped directory). Checkpoints will be saved to this folder. To check the loss values during training, see log.txt. You can also check training-data reconstructions in the train-vis subfolder. By default the batch size is tuned to run on one Tesla P100 GPU; you can change it in train_params in the corresponding .yaml file.

Evaluation

We use two metrics to evaluate our model: 1) landmark regression MAE and 2) foreground segmentation IoU.

  1. For computing the MAE, download eval_images.tar.gz from google-drive-eval and use the following command:
CUDA_VISIBLE_DEVICES=0 python evaluate.py --config config/dataset_name.yaml --device_ids 0 --root_dir path-to-root-folder-of-dataset --checkpoint_path dataset-name.cpk.pth.tar
  2. Coming soon...
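Both metrics have simple definitions and can be sketched in plain Python on toy data (hypothetical helper names; evaluate.py computes them on the actual predictions):

```python
def landmark_mae(pred, gt):
    """Mean absolute error between predicted and ground-truth landmarks.

    pred, gt: lists of (x, y) landmark coordinates of equal length.
    Averages the absolute error over both coordinates of every landmark.
    """
    errors = [abs(px - gx) + abs(py - gy) for (px, py), (gx, gy) in zip(pred, gt)]
    return sum(errors) / (2 * len(errors))

def foreground_iou(pred_mask, gt_mask):
    """Intersection-over-union of two binary foreground masks (flat 0/1 lists)."""
    inter = sum(p & g for p, g in zip(pred_mask, gt_mask))
    union = sum(p | g for p, g in zip(pred_mask, gt_mask))
    return inter / union if union else 1.0

print(landmark_mae([(1, 2), (3, 4)], [(1, 3), (4, 4)]))  # 0.5
print(foreground_iou([1, 1, 0, 0], [1, 0, 1, 0]))        # 0.333...
```

Lower MAE and higher IoU are better; MAE measures how well the discovered parts localize landmarks, while IoU measures how well their union covers the foreground.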

Datasets

  1. Taichi. Please follow the instructions from https://github.com/AliaksandrSiarohin/video-preprocessing.

  2. VoxCeleb. Please follow the instructions from https://github.com/AliaksandrSiarohin/video-preprocessing.

Training on your own dataset

  1. Follow the instructions from First Order Motion Model to prepare your dataset, and train First Order Motion Model on it.

  2. This repository uses the same dataset format as First Order Motion Model, so you can use the same data as in 1).

Additional notes

Citation:

Motion Supervised co-part Segmentation:

@article{Siarohin_2020_motion,
  title={Motion Supervised co-part Segmentation},
  author={Siarohin, Aliaksandr and Roy, Subhankar and Lathuilière, Stéphane and Tulyakov, Sergey and Ricci, Elisa and Sebe, Nicu},
  journal={arXiv preprint},
  year={2020}
}

First Order Motion Model:

@InProceedings{Siarohin_2019_NeurIPS,
  author={Siarohin, Aliaksandr and Lathuilière, Stéphane and Tulyakov, Sergey and Ricci, Elisa and Sebe, Nicu},
  title={First Order Motion Model for Image Animation},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  month = {December},
  year = {2019}
}
