DatasetGAN
This is the official code and data release for:
DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort
Yuxuan Zhang*, Huan Ling*, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, Sanja Fidler
* authors contributed equally
CVPR'21, Oral [paper] [supplementary] [Project Page]
News
- Benchmark Challenge - A benchmark with diverse testing images is coming soon -- stay tuned!
- Generated dataset for downstream tasks is coming soon -- stay tuned!
- New version released under [EditGAN].
  - Training on real images with an encoder.
  - Stylegan2 backbone support.
  - High-precision semantic editing support.
License
For any code dependency related to Stylegan, the license is under the Creative Commons BY-NC 4.0 license by NVIDIA Corporation. To view a copy of this license, visit LICENSE.
The code of DatasetGAN is released under the MIT license. See LICENSE for additional details.
The dataset of DatasetGAN is released under the Creative Commons BY-NC 4.0 license by NVIDIA Corporation. You can use, redistribute, and adapt the material for non-commercial purposes, as long as you give appropriate credit by citing our paper and indicating any changes that you've made.
Requirements
- Python 3.6 is supported.
- Pytorch 1.4.0.
- This code is tested with CUDA 10.1 toolkit and CuDNN 7.5.
- All results in our paper are based on Nvidia Tesla V100 GPUs with 32GB memory.
- Please check the python package requirements in requirements.txt and install them with pip install -r requirements.txt.
- Download the dataset from Google Drive and put it in the folder ./datasetGAN/dataset_release. The cached npy files are explained in the section Create your own model. Please be aware that the dataset of DatasetGAN is released under the Creative Commons BY-NC 4.0 license by NVIDIA Corporation.
- Download the pretrained checkpoint from Stylegan and convert the tensorflow checkpoint to pytorch. We also release the pytorch checkpoint for your convenience. Put checkpoints in the folder ./checkpoint/stylegan_pretrain. Please be aware that any code dependency and checkpoint related to Stylegan is under the Creative Commons BY-NC 4.0 license by NVIDIA Corporation.
Training
To reproduce paper DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort:
cd datasetGAN
- Run Step1: Interpreter training.
- Run Step2: Sampling to generate massive annotation-image dataset.
- Run Step3: Train Downstream Task.
1. Interpreter Training
python train_interpreter.py --exp experiments/<exp_name>.json
Note: Training time for 16 images is around one hour. 160GB of RAM is required for 16-image training. One can cache the data returned from the prepare_data function to disk, but this will increase training time due to the I/O burden.
Example of annotation schema for Face class. Please refer to paper for other classes.
Download Checkpoints
2. Run GAN Sampling
python train_interpreter.py \
--generate_data True --exp experiments/<exp_name>.json \
--resume [path-to-trained-interpreter in step 1] \
--num_sample [num-samples]
To run sampling processes in parallel:
sh datasetGAN/script/generate_face_dataset.sh
Example of sampling images and annotation:
3. Train Downstream Task
python train_deeplab.py \
--data_path [path-to-generated-dataset in step 2] \
--exp experiments/<exp_name>.json
Inference
python test_deeplab_cross_validation.py --exp experiments/face_34.json \
--resume [path-to-downstream task checkpoint] --cross_validate True
June 21 Update:
For interpreter training, we changed the upsampling method from nearest-neighbor to bilinear upsampling and updated the results in Table 1. The table reports mIOU.
Sep 12 Update:
Thanks to @greatwallet. As reported in the issue, we fixed an uncertainty score calculation bug. The Ours-Fix row shows the updated results.
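For context, uncertainty scores of this kind are often computed as the mean per-pixel entropy of averaged softmax predictions over an ensemble of interpreters. The sketch below is a generic illustration of that idea, not necessarily the exact formula used in this repo:

```python
# Generic sketch of an ensemble uncertainty score for a segmentation map.
# This illustrates the idea only; it is not the repo's exact implementation.
import numpy as np

def ensemble_uncertainty(logits_list):
    """Mean per-pixel entropy of the ensemble-averaged softmax.

    logits_list: list of arrays of shape (num_classes, H, W), one per
    ensemble member. Returns a scalar: higher means more uncertain.
    """
    probs = []
    for logits in logits_list:
        e = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
        probs.append(e / e.sum(axis=0, keepdims=True))
    p = np.mean(probs, axis=0)                      # (C, H, W) ensemble mean
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0)  # (H, W) per-pixel entropy
    return float(entropy.mean())
```

Scores like this can rank generated images so the most uncertain ones are filtered out before downstream training.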
Create your own model
To create your own model, please follow these steps:
- Train your own stylegan model using the official Stylegan code and convert the tensorflow checkpoint to pytorch. Specify the stylegan path in datasetGAN/experiments/customized.json.
- Run python datasetGAN/make_training_data.py --exp datasetGAN/experiments/customized.json --sv_path ./new_data. This function will generate sample images and dump two numpy files in sv_path:
  a. avg_latent_stylegan1.npy (Dim: 18*512) is used for truncation. It is the average of the w latent space: we sample 8000 codes from z space and average the corresponding w codes.
  b. latent_stylegan1.npy (Dim: number_data*512) caches the z latent codes used to retrieve the corresponding training images.
- Annotate the images as you like, following the file format in Google Drive, and put the annotations in the folder ./datasetGAN/dataset_release.
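The avg_latent_stylegan1.npy computation described above (sample 8000 z codes, average their w codes, broadcast to 18 style layers) can be sketched as follows; mapping_fn is a hypothetical stand-in for Stylegan's real mapping network:

```python
# Sketch of the truncation-target computation: average w over sampled z codes.
# mapping_fn stands in for Stylegan's mapping network; names are illustrative.
import numpy as np

def average_w_latent(mapping_fn, num_samples=8000, z_dim=512, num_layers=18, seed=0):
    """Return an (18, 512) array: the mean w code tiled across style layers,
    matching the avg_latent_stylegan1.npy shape described above."""
    rng = np.random.default_rng(seed)
    w_sum = np.zeros(z_dim)
    for _ in range(num_samples):
        z = rng.standard_normal(z_dim)  # sample z ~ N(0, I)
        w_sum += mapping_fn(z)          # map z -> w and accumulate
    w_avg = w_sum / num_samples
    return np.tile(w_avg, (num_layers, 1))  # same mean w for all 18 layers
```

During generation, truncation then interpolates each sampled w toward this average to trade diversity for fidelity.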
Citations
Please use the following citation if you use our data or code:
@inproceedings{zhang2021datasetgan,
title={Datasetgan: Efficient labeled data factory with minimal human effort},
author={Zhang, Yuxuan and Ling, Huan and Gao, Jun and Yin, Kangxue and Lafleche, Jean-Francois and Barriuso, Adela and Torralba, Antonio and Fidler, Sanja},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10145--10155},
year={2021}
}