### Generative Adversarial Text-to-Image Synthesis

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee

This is the code for our ICML 2016 paper on text-to-image synthesis using conditional GANs. You can use it to train and sample from text-to-image models. The code is adapted from the excellent dcgan.torch.

#### Setup Instructions

You will need to install Torch, CuDNN, and the display package.
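As a rough sketch (not the project's official install procedure), a typical setup on a machine that already has CUDA and the cuDNN library installed might look like this; the display rockspec URL is the one published by the szym/display project:

```bash
# Install the Torch distribution (see http://torch.ch/docs/getting-started.html).
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch && ./install.sh

# Torch bindings for cuDNN (assumes CUDA and the cuDNN library are already installed).
luarocks install cudnn

# The display package, used to view images and results in the browser.
luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
```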

#### How to train a text-to-image model:

  1. Download the birds, flowers, and COCO caption data in Torch format.
  2. Download the birds, flowers, and COCO image data.
  3. Download the text encoders for the birds, flowers, and COCO descriptions.
  4. Modify the CONFIG file to point to your data and text encoder paths.
  5. Run one of the training scripts, e.g. `./scripts/train_cub.sh` (a sketch of steps 4-5 follows this list).
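As a sketch of steps 4-5: edit CONFIG so its path variables point at the downloaded caption data, images, and text encoders, then launch one of the training scripts. The variable names below are illustrative placeholders, not the actual keys used in the repository's CONFIG file; check that file for the real names.

```bash
# Example CONFIG edits (placeholder variable names -- use the ones already in the file):
DATA_ROOT=/path/to/cub_caption_data        # caption data in Torch format (step 1)
IMG_DIR=/path/to/CUB_200_2011/images       # image data (step 2)
NET_TXT=/path/to/cub_text_encoder.t7       # downloaded text encoder (step 3)

# Step 5: launch training on the birds (CUB) dataset.
./scripts/train_cub.sh
```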

#### How to generate samples:

  • For flowers: `./scripts/demo_flowers.sh`. Add text descriptions to `scripts/flowers_queries.txt`.
  • For birds: `./scripts/demo_cub.sh`.
  • For COCO (more general images): `./scripts/demo_coco.sh`.
  • An HTML file will be generated with the results; a usage sketch follows below.
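For example, a flowers run might look like the following; the query text here is made up, and the name and location of the generated HTML file are whatever the demo script writes out:

```bash
# Append a caption to the query file, then run the flowers demo.
echo "a flower with long pink petals and a tall yellow stamen" >> scripts/flowers_queries.txt
./scripts/demo_flowers.sh

# Open the generated HTML file in a browser to inspect the synthesized images.
```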

#### Pretrained models:

#### How to train a text encoder from scratch:

  • You may want to do this if you have your own new dataset of text descriptions.
  • For flowers and birds: follow the instructions here.
  • For MS-COCO: `./scripts/train_coco_txt.sh` (see the sketch below).
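A minimal sketch for the MS-COCO case, assuming (as in the training steps above) that the text-encoder path is configured in CONFIG; the checkpoint name shown is hypothetical:

```bash
# Train a text encoder on the MS-COCO captions; data paths are read from CONFIG.
./scripts/train_coco_txt.sh

# Afterwards, point the text-encoder path in CONFIG at the new checkpoint
# (hypothetical name) so the training and demo scripts pick it up, e.g.:
# NET_TXT=/path/to/coco_text_encoder.t7
```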

#### Citation

If you find this useful, please cite our work as follows:

@inproceedings{reed2016generative,
  title={Generative Adversarial Text-to-Image Synthesis},
  author={Scott Reed and Zeynep Akata and Xinchen Yan and Lajanugen Logeswaran and Bernt Schiele and Honglak Lee},
  booktitle={Proceedings of The 33rd International Conference on Machine Learning},
  year={2016}
}