Code for "ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on", accepted at WACV 2021 Generation of Human Behavior Workshop.

ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on

[ Paper ] [ Project Page ]

This repository contains the code for our paper accepted at the Generation of Human Behavior Workshop at WACV 2021.

Key Contributions:

  • Scientific experiments built from the ground up to isolate the effect of each method
  • Empirically show that DensePose yields better quality than CocoPose
  • Add self-attention layers
  • Find that GELU gives the best results (a minimal sketch of the attention and activation choices follows this list)
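
The following is a minimal sketch of the two design choices above, not the repository's actual implementation: a SAGAN-style self-attention layer (in the spirit of the Self-Attention GAN reference credited in the acknowledgements) and a convolution block that uses GELU in place of ReLU. The module and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over feature maps (illustrative sketch only)."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight, starts at 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        attn = F.softmax(torch.bmm(q, k), dim=-1)     # (b, h*w, h*w) attention map
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


def conv_gelu_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Conv + norm + GELU block (illustrative of the activation swap)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.GELU(),
    )
```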

Architecture Overview

[Figure: architecture overview]

How To Use This Repository

The points of entry to this repository are train.py and test.py. We have organized our code into three main folders: datasets, models, and options.

The datasets folder contains several custom-defined datasets. To create your own custom try-on dataset, please refer to Documentation IV below.
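
The class below is only a rough sketch of what a custom try-on dataset might look like; the directory layout, dictionary keys, and image size are assumptions for illustration, and the actual fields expected by this repository are defined in the datasets folder and in Documentation IV.

```python
import os

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class CustomTryonDataset(Dataset):
    """Rough sketch of a frame/cloth try-on dataset; layout and keys are illustrative only."""

    def __init__(self, root: str, image_size=(256, 192)):
        self.root = root
        self.frames = sorted(os.listdir(os.path.join(root, "frames")))  # person frames
        self.transform = transforms.Compose(
            [transforms.Resize(image_size), transforms.ToTensor()]
        )

    def __len__(self) -> int:
        return len(self.frames)

    def __getitem__(self, idx: int) -> dict:
        name = self.frames[idx]
        frame = Image.open(os.path.join(self.root, "frames", name)).convert("RGB")
        cloth = Image.open(os.path.join(self.root, "cloth", name)).convert("RGB")
        return {
            "image": self.transform(frame),  # person frame tensor
            "cloth": self.transform(cloth),  # target clothing tensor
            "name": name,
        }
```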

The models folder contains several models, such as the warp and U-Net models that we use for virtual try-on. The networks sub-folder contains several utility networks that these models build on.
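
As a rough orientation for readers unfamiliar with U-Nets, the sketch below shows a toy encoder/decoder with a single skip connection; it is not the repository's U-Net or warp model, whose definitions live in the models folder.

```python
import torch
import torch.nn as nn


class TinyUNet(nn.Module):
    """Toy U-Net-style encoder/decoder with one skip connection (illustration only)."""

    def __init__(self, in_ch: int = 3, out_ch: int = 3, base: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.GELU())
        self.down = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.GELU())
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=4, stride=2, padding=1)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.GELU())
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.enc(x)                     # full-resolution features
        bottom = self.down(skip)               # downsample by 2
        up = self.up(bottom)                   # upsample back to input resolution
        merged = torch.cat([up, skip], dim=1)  # skip connection
        return self.head(self.dec(merged))
```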

The options folder contains the options we use at train and test time. These options keep our code flexible and make it easy to run experiments.
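
The snippet below only illustrates the general argparse pattern that options modules like these tend to follow; the flag names are hypothetical, and the real options are defined in the options folder.

```python
import argparse


def get_base_parser() -> argparse.ArgumentParser:
    """Illustrative options parser; flag names here are hypothetical, not the repository's."""
    parser = argparse.ArgumentParser(description="Train/test options (hypothetical flags)")
    parser.add_argument("--dataset", type=str, default="vvt", help="which dataset class to load")
    parser.add_argument("--model", type=str, default="unet", help="which model to build")
    parser.add_argument("--batch_size", type=int, default=8, help="samples per training batch")
    parser.add_argument("--gpu_ids", type=str, default="0", help="comma-separated GPU ids")
    return parser


if __name__ == "__main__":
    opt = get_base_parser().parse_args()
    print(opt)
```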

Documentation

Results

Qualitative Comparison with FW-GAN and CP-VTON

[Figure: qualitative comparison with FW-GAN and CP-VTON]

Qualitative Comparison of Pose and Self-Attention

[Figure: qualitative comparison of pose and self-attention]

Qualitative Comparison of Activation Functions

[Figure: qualitative comparison of activation functions]

Qualitative Comparison of Optical Flow

[Figure: qualitative comparison of optical flow]

Acknowledgements and Related Code

  • This code is based in part on Sergey Wong's stellar CP-VTON repository. Thank you very much, Sergey, for your hard work.
  • Thank you to Haoye Dong and his team for hosting the VUHCS competition at CVPR 2020, providing the VVT Dataset, and giving access to the FW-GAN reference code.
  • Thank you to NVIDIA's team for their work on Vid2Vid and FlowNet2.
  • Credits to David Park's Self-Attention GAN implementation, which we used as a reference for our attention layers.
  • Credits to Self-Corrective Human-Parsing for easy parsing of LIP clothing labels.
  • Credits to the detectron2 repository for DensePose annotations.