
Twin-GAN -- Unpaired Cross-Domain Image Translation with Weight-Sharing GANs

TwinGAN -- Unsupervised Image Translation for Human Portraits

[Figures: translation results, identity preservation, and search-engine demo]

Use Pretrained Models

We provide two pre-trained models: human to anime and human to cats.

Run the following command to translate the demo inputs.

python inference/image_translation_infer.py \
--model_path="/PATH/TO/MODEL/256/" \
--image_hw=256 \
--input_tensor_name="sources_ph" \
--output_tensor_name="custom_generated_t_style_source:0" \
--input_image_path="./demo/inference_input/" \
--output_image_path="./demo/inference_output/"

The input_image_path can be either a single image file or a directory containing images.
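The path-handling behavior described above can be sketched as follows. This is a hypothetical helper, not code from the repo; the function name, the extension list, and the exact behavior are assumptions for illustration.

```python
import os

# Assumed set of supported extensions; the actual script may accept more.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def collect_input_images(input_image_path):
    """Resolve input_image_path to a list of image files.

    If the path is a directory, return its image files (sorted);
    otherwise treat the path as a single image.
    """
    if os.path.isdir(input_image_path):
        return sorted(
            os.path.join(input_image_path, name)
            for name in os.listdir(input_image_path)
            if os.path.splitext(name)[1].lower() in IMAGE_EXTENSIONS
        )
    return [input_image_path]
```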

For more information, see the documentation on inference and eval and on the web interface.

Training

Download CelebA and the Getchu dataset by following the datasets guide, then train your model using the scripts in the training guide.

Blog and Technical Report

An English blog and a Chinese (δΈ­ζ–‡) blog were published in early April 2018 for readers with less technical background.

[Figure: network setup]

[Figure: conv layer structure]

Please refer to the technical report for details on the network structure and losses.

Extra materials:

Presentation Slides at Anime Expo 2018

Related works

Our idea of using adaptive normalization parameters for image translation is not unique. To the best of our knowledge, at least two other works share similar ideas: MUNIT and EG-UNIT. Our model was developed during the same period as these models.
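The adaptive normalization mechanism referred to above can be illustrated with a minimal NumPy sketch of AdaIN-style normalization: content features are instance-normalized per channel, then rescaled with the statistics of the style (target-domain) features. This is a generic sketch of the mechanism, not the repo's TensorFlow implementation, and the function name is an assumption.

```python
import numpy as np

def adaptive_instance_norm(content, style, eps=1e-5):
    """AdaIN sketch: re-normalize content features (C, H, W) so they
    carry the per-channel mean/std of the style features."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

After this operation the output keeps the spatial layout of the content features but matches the channel statistics of the style features.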

Some key differences between our model and the two mentioned above: we find a UNet extremely helpful in maintaining semantic correspondence across domains, and we found that sharing all convolution filter weights speeds up training while maintaining the same output quality.
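The two design choices above (shared convolution weights across domains, plus UNet skip connections) can be sketched in a toy NumPy model. Assumptions: 1x1 convolutions stand in for real conv layers as matrix multiplies, and a single skip level stands in for a full UNet; the repo's actual TensorFlow network is far larger.

```python
import numpy as np

class TinySharedUNet:
    """Toy sketch: both domains pass through the SAME encoder/decoder
    weights, and a UNet-style skip connection feeds early features
    directly into the decoder to preserve semantic correspondence."""

    def __init__(self, channels=4, rng=None):
        rng = rng or np.random.default_rng(0)
        # Shared across both domains (1x1 convs as matmuls).
        self.enc_w = rng.standard_normal((channels, channels)) * 0.1
        self.dec_w = rng.standard_normal((2 * channels, channels)) * 0.1

    def translate(self, image):
        # image: (H, W, C)
        enc = np.maximum(image @ self.enc_w, 0.0)     # shared encoder + ReLU
        skip = np.concatenate([enc, image], axis=-1)  # UNet skip connection
        return skip @ self.dec_w                      # shared decoder
```

In the full model, domain identity is carried by the normalization parameters rather than by separate convolution filters, which is what allows all filter weights to be shared.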

Documentation

More documentation can be found under docs/.

Reference

Much of the code is adapted from other open-source projects. Here is a non-exhaustive list of the repos from which I borrowed code extensively.

TF-Slim image models library

PGGAN

Anime related repos and datasets

Shameless self promotion of my AniSeg anime object detection & segmentation model.

Sketch coloring using PaintsTransfer and PaintsChainer.

Create anime portraits at Crypko and MakeGirlsMoe

The all-encompassing anime dataset Danbooru2017 by gwern.

My hand-curated sketch-colored image dataset.

Disclaimer

This personal project was developed and open sourced while I was working for Google, which is why you will see Copyright 2018 Google LLC in each file. This is not an officially supported Google product. See License and Contributing for more details.