

Sketch Your Own GAN: Customizing a GAN model with hand-drawn sketches.

Sketch Your Own GAN

Project | Paper | YouTube | Slides

Our method takes in one or a few hand-drawn sketches and customizes an off-the-shelf GAN to match the input sketches. While our new model changes an object’s shape and pose, other visual cues such as color, texture, and background are faithfully preserved after the modification.


Sheng-Yu Wang¹, David Bau², Jun-Yan Zhu¹.
¹CMU, ²MIT CSAIL
In ICCV, 2021.

Aug 16 Update: Training code, evaluation code, and the dataset are released. Model weights are also updated; please re-run bash weights/download_weights.sh if you downloaded the weights before this update.

Results

Our method can customize a pre-trained GAN to match input sketches.

Latent space interpolation using our customized models. Interpolation between latent codes remains smooth after customization.

Image 1 → Interpolation → Image 2
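
As a reference for reproducing this, linear interpolation between two latent codes can be sketched as follows. This is a minimal illustration, not the repo's code; it assumes a StyleGAN2 generator with the rosinality-style call signature (a list of latent codes in, an image and latents out) already loaded with one of the customized checkpoints.

# Minimal latent interpolation sketch (not the repo's code). Assumes `generator`
# is a rosinality-style StyleGAN2 generator loaded with a customized checkpoint.
import torch

@torch.no_grad()
def interpolate_latents(generator, z1, z2, steps=8):
    frames = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        z = (1 - alpha) * z1 + alpha * z2      # linear blend between the two codes
        img, _ = generator([z])                # adapt this call to your generator API
        frames.append(img)
    return torch.cat(frames)

# z1, z2 = torch.randn(1, 512), torch.randn(1, 512)   # 512 = StyleGAN2 style dim
# frames = interpolate_latents(generator, z1, z2)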

Image editing using our customized models. Given a real image (a), we project it to the original model's latent space z using Huh et al. (b). (c) We then feed the projected z to our standing cat model trained on sketches. (d) Finally, we edit the image with the "add fur" operation using GANSpace.

Model interpolation. We can interpolate between customized models by interpolating in the W-latent space.

Model 1 → Interpolation in W-latent space → Model 2

We observe a similar effect by interpolating the model weights directly.

Model 1 → Interpolation in the model weight space → Model 2
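
A weight-space interpolation like the one above can be sketched as follows. The snippet assumes each .pth file is a plain generator state_dict with matching keys; if the checkpoints wrap the weights under a key, unwrap them first.

# Sketch of blending two customized generators directly in weight space.
# Assumes both checkpoints are plain state_dicts with identical keys.
import torch

def interpolate_weights(path_a, path_b, alpha=0.5):
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    blended = {}
    for key, tensor_a in sd_a.items():
        if torch.is_floating_point(tensor_a):
            blended[key] = (1 - alpha) * tensor_a + alpha * sd_b[key]
        else:
            blended[key] = tensor_a        # copy integer buffers unchanged
    return blended

# generator.load_state_dict(interpolate_weights("model1.pth", "model2.pth", 0.5))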

Failure cases. Our method cannot generate images that match Attneave’s cat sketch or Picasso’s horse sketch. We note that Attneave’s cat depicts a complex pose, and Picasso’s sketches are drawn in a distinctive style, both of which make our method struggle.

Getting Started

Clone our repo

git clone git@github.com:PeterWang512/GANSketching.git
cd GANSketching

Install packages

  • Install PyTorch (version >= 1.6.0) from pytorch.org
  • Install the remaining dependencies:
    pip install -r requirements.txt

Download model weights

  • Run bash weights/download_weights.sh

Generate samples from a customized model

This command runs the customized model specified by --ckpt and saves the generated samples to the directory given by --save_dir.

# generates samples from the "standing cat" model.
python generate.py --ckpt weights/photosketch_standing_cat_noaug.pth --save_dir output/samples_standing_cat

# generates samples from the cat face model in Figure 1 of the paper.
python generate.py --ckpt weights/by_author_cat_aug.pth --save_dir output/samples_teaser_cat

# generates samples from the customized FFHQ model.
python generate.py --ckpt weights/by_author_face0_aug.pth --save_dir output/samples_ffhq_face0 --size 1024 --batch_size 20
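
generate.py is the supported entry point; if you want to sample programmatically instead, a rough sketch looks like this. The checkpoint loading and the generator call below are assumptions, so check generate.py for the exact details.

# Rough programmatic sampling sketch (see generate.py for the exact loading code).
import torch
from torchvision.utils import save_image

@torch.no_grad()
def sample_grid(generator, n_samples=16, style_dim=512, seed=0, out_path="samples.png"):
    torch.manual_seed(seed)                    # fixed seed for reproducible samples
    z = torch.randn(n_samples, style_dim)
    img, _ = generator([z])                    # rosinality-style forward; adapt if needed
    save_image(img, out_path, nrow=4, normalize=True)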

Latent space edits by GANSpace

Our models preserve the latent-space editability of the original model and can apply the same edits using the latent directions reported in Härkönen et al. (GANSpace).

# add fur to the standing cats
python ganspace.py --obj cat --comp_id 27 --scalar 50 --layers 2,4 --ckpt weights/photosketch_standing_cat_noaug.pth --save_dir output/ganspace_fur_standing_cat

# close the eyes of the standing cats
python ganspace.py --obj cat --comp_id 45 --scalar 60 --layers 5,7 --ckpt weights/photosketch_standing_cat_noaug.pth --save_dir output/ganspace_eye_standing_cat
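
Conceptually, a GANSpace edit shifts the per-layer w-latents along a principal component for a subset of layers, roughly as sketched below. The names and shapes here are illustrative assumptions, not the interface of ganspace.py.

# Conceptual GANSpace-style edit: move selected layers' w codes along a PCA
# direction. Names and shapes are illustrative; see ganspace.py for the real interface.
import torch

def apply_ganspace_edit(w_plus, component, scalar, layers):
    # w_plus:    [batch, n_layers, 512] per-layer latent codes
    # component: [512] principal direction from GANSpace
    # layers:    indices of the layers to edit (cf. the --layers flag above)
    edited = w_plus.clone()
    for i in layers:
        edited[:, i] += scalar * component
    return edited

# w_edited = apply_ganspace_edit(w_plus, component, scalar=50, layers=range(2, 5))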

Colab

Thanks to Luke Floden for creating a Colab notebook for the quick start above. Colab link: quickstart_colab.ipynb

Model Training

Training and evaluating models trained on PhotoSketch inputs requires running the Precision and Recall metric. The following command pulls the submodule of the forked Precision and Recall repo.

git submodule update --init --recursive

Download Datasets and Pre-trained Models

The following scripts download our sketch data, our evaluation set, LSUN, and pre-trained models from StyleGAN2 and PhotoSketch.

# Download the sketches
bash data/download_sketch_data.sh

# Download evaluation set
bash data/download_eval_data.sh

# Download pretrained models from StyleGAN2 and PhotoSketch
bash pretrained/download_pretrained_models.sh

# Download LSUN cat, horse, and church dataset
bash data/download_lsun.sh

To train FFHQ models with image regularization, please download the FFHQ dataset using this link. This is the zip file of 70,000 images at 1024x1024 resolution. Unzip the files, rename the images1024x1024 folder to ffhq, and place it in ./data/image/.

Training Scripts

Example training configurations are specified by the scripts in the scripts folder. Use the following commands to launch training.

# Train the "horse riders" model
bash scripts/train_photosketch_horse_riders.sh

# Train the cat face model in Figure 1 of the paper.
bash scripts/train_teaser_cat.sh

# Train on a single quickdraw sketch
bash scripts/train_quickdraw_single_horse0.sh

# Train on sketches of faces (1024px)
bash scripts/train_authorsketch_ffhq0.sh

The training progress is tracked with wandb by default. To disable wandb logging, please add the --no_wandb flag to the training script.
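
For intuition, the training objective described in the paper can be sketched as below: generated images are translated to sketches with a pre-trained PhotoSketch network and matched to the user sketches via an adversarial loss, while a second adversarial loss on the images themselves keeps the generator close to the original distribution. The module names and the weighting are illustrative assumptions; the training scripts above are the authoritative configuration.

# Illustrative sketch of the generator objective (not the repo's training code).
# G, F_sketch (PhotoSketch), D_sketch, and D_image are assumed nn.Modules.
import torch
import torch.nn.functional as F

def generator_loss(G, F_sketch, D_sketch, D_image, z, lambda_image=1.0):
    fake_images = G(z)
    # sketch-domain loss: make sketches of generated images match the user sketches
    loss_sketch = F.softplus(-D_sketch(F_sketch(fake_images))).mean()
    # image-domain regularization: keep generated images close to the original data
    loss_image = F.softplus(-D_image(fake_images)).mean()
    return loss_sketch + lambda_image * loss_image   # weighting is a hyperparameter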

Evaluations

Please make sure the evaluation set and model weights are downloaded before running the evaluation.

# You may have run these scripts already in the previous sections
bash weights/download_weights.sh
bash data/download_eval_data.sh

Use the following script to evaluate the models; the results will be saved to a CSV file specified by the --output flag. --models_list should contain a list of tuples of model weight paths and evaluation data. Please see weights/eval_list for an example.

python run_metrics.py --models_list weights/eval_list --output metric_results.csv
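
Once the run finishes, the CSV can be inspected with pandas, for example (the column layout comes from run_metrics.py):

# Quick look at the evaluation output.
import pandas as pd

results = pd.read_csv("metric_results.csv")
print(results.to_string(index=False))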

Acknowledgments

This repository borrows partially from SPADE, stylegan2-pytorch, PhotoSketch, GANSpace, and data-efficient-gans.

Reference

If you find this useful for your research, please cite the following work.

@inproceedings{wang2021sketch,
  title={Sketch Your Own GAN},
  author={Wang, Sheng-Yu and Bau, David and Zhu, Jun-Yan},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2021}
}

Feel free to contact us with any comments or feedback.