Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
Authors' official implementation of "Unsupervised Discovery of Interpretable Directions in the GAN Latent Space" (ICML 2020).
This code explores interpretable latent space directions of a pretrained GAN.
Our approach scheme: the latent deformator A aims to produce shifts that are easy for the reconstructor R to distinguish.
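To make the scheme concrete, here is a minimal PyTorch-style sketch of one training step; the module interfaces, shapes and loss weights below are placeholders, not this repository's exact code:

```python
import torch
import torch.nn.functional as F


def rectification_step(G, A, R, optimizer, batch_size=32, latent_dim=128,
                       num_directions=128, max_shift=6.0, shift_weight=0.25):
    """One illustrative training step of the direction-discovery scheme.

    G: pretrained (frozen) generator, A: latent deformator, R: reconstructor.
    Hyperparameters and call signatures are assumptions for illustration only.
    """
    # Sample latent codes, direction indices and signed shift magnitudes.
    z = torch.randn(batch_size, latent_dim)
    k = torch.randint(num_directions, (batch_size,))
    eps = (torch.rand(batch_size) * 2.0 - 1.0) * max_shift

    # Scale a one-hot direction vector by the shift and map it through the deformator.
    target = F.one_hot(k, num_directions).float() * eps.unsqueeze(1)
    shift = A(target)

    # The reconstructor must recover which direction was used and how far we moved.
    logits, eps_pred = R(G(z), G(z + shift))
    loss = F.cross_entropy(logits, k) + shift_weight * F.l1_loss(eps_pred, eps)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```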
Here are several examples for Spectral Norm GAN (MNIST & Anime Faces), ProgGAN (CelebA-HQ) and BigGAN (ILSVRC):
Requirements
python 3.6 or later
jupyter (for visualization)
torch>=1.4
torchvision
tqdm
tensorboardX
See requirement.txt for the exact authors' environment.
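Assuming a standard pip setup, the listed dependencies can be installed with:

```
pip install -r requirement.txt
```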
Training
Here is a minimal example of a latent rectification run command:
python run_train.py \
--gan_type BigGAN \
--gan_weights models/pretrained/generators/BigGAN/G_ema.pth \
--deformator ortho \
--out rectification_results_dir
This script saves the discovered latent space directions as the weights of the LatentDeformator module.
It also saves image charts with examples of the latent directions.
`gan_type` specifies the generator model. Take into account the model-specific parameters for StyleGAN2 (`gan_resolution`, `w_shift`) and BigGAN (`target_class`).
Note that you can pass any parameter of the `Params` class defined in `trainer.py` as a command-line argument.
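For example, the BigGAN run above can additionally target a specific ImageNet class via `target_class` (the class index 239 below is only an illustrative value):

```
python run_train.py \
    --gan_type BigGAN \
    --gan_weights models/pretrained/generators/BigGAN/G_ema.pth \
    --deformator ortho \
    --target_class 239 \
    --out rectification_results_dir
```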
Evaluation
Run the evaluation.ipynb notebook to inspect the discovered directions.
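Programmatically, the inspection boils down to shifting a latent code along one of the learned directions. The sketch below only illustrates the idea; the exact generator and LatentDeformator loading code and call signatures are assumptions, see the notebook for the actual procedure.

```python
import torch


def inspect_direction(G, deformator, direction_idx, shift_magnitude, latent_dim, device='cuda'):
    """Render a sample before and after moving along one discovered direction.

    G and deformator stand for the pretrained generator and the trained
    LatentDeformator loaded from the run_train.py output directory; input
    dimensions and call signatures here are assumptions for illustration.
    """
    z = torch.randn(1, latent_dim, device=device)

    # One-hot vector scaled by the desired shift magnitude, mapped through the
    # deformator to obtain the corresponding latent-space shift.
    one_hot = torch.zeros(1, latent_dim, device=device)
    one_hot[0, direction_idx] = shift_magnitude

    with torch.no_grad():
        original = G(z)
        shifted = G(z + deformator(one_hot))
    return original, shifted
```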
Pre-trained Models
Run `python download.py` to download all pretrained generators and latent directions.
We also provide a `human_annotation.txt` file with annotations of some of the directions.
The pretrained models are unchanged copies from the following sources:
- `100_celeb_hq_network-snapshot-010403.pth` from https://github.com/ptrblck/prog_gans_pytorch_inference
- `G_ema.pth` from https://github.com/ajbrock/BigGAN-PyTorch
- `stylegan2-ffhq-config-f.pkl` from https://github.com/NVlabs/stylegan2, converted with https://github.com/rosinality/stylegan2-pytorch
Results
Here are some examples of generated image manipulation by moving along the discovered directions:
StyleGAN2 - FFHQ - opened eyes
BigBiGAN - ImageNet - light direction
BigGAN - ImageNet - rotation
Citation
@inproceedings{voynov2020unsupervised,
title={Unsupervised discovery of interpretable directions in the gan latent space},
author={Voynov, Andrey and Babenko, Artem},
booktitle={International Conference on Machine Learning},
pages={9786--9796},
year={2020},
organization={PMLR}
}
Credits
BigGAN code and weights are based on the authors' implementation: https://github.com/ajbrock/BigGAN-PyTorch
ProgGAN code and weights are based on: https://github.com/ptrblck/prog_gans_pytorch_inference
U-net segmentation model code is based on: https://github.com/milesial/Pytorch-UNet