Note: Code Tutorial + Implementation Tutorial
VQGAN
Vector Quantized Generative Adversarial Networks (VQGAN) is a generative model for image modeling, introduced in Taming Transformers for High-Resolution Image Synthesis. The approach is built on two stages. The first stage learns in an autoencoder-like fashion: images are encoded into a low-dimensional latent space, which is then vector-quantized against a learned codebook. The quantized latent vectors are projected back into image space by a decoder; both encoder and decoder are fully convolutional. The second stage learns a transformer over the latent space. Over the course of training it learns which codebook vectors go together and which do not, and this can then be used autoregressively to generate previously unseen images from the data distribution.
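The central operation of the first stage is the codebook lookup. The snippet below is a minimal sketch of that vector-quantization step in PyTorch, assuming a codebook of 1024 entries with 256-dimensional codes, the usual codebook/commitment losses, and a straight-through estimator for the gradients; the class name, shapes, and hyperparameters are illustrative and not necessarily those used in this repository.

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Minimal sketch of the first-stage codebook lookup (hypothetical names/sizes)."""

    def __init__(self, num_codes: int = 1024, code_dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.beta = beta
        self.embedding = nn.Embedding(num_codes, code_dim)
        self.embedding.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z):
        # z: encoder output of shape (B, C, H, W); flatten spatial positions to (B*H*W, C)
        b, c, h, w = z.shape
        z_flat = z.permute(0, 2, 3, 1).reshape(-1, c)

        # Squared L2 distance between every latent vector and every codebook entry
        dist = (
            z_flat.pow(2).sum(dim=1, keepdim=True)
            - 2 * z_flat @ self.embedding.weight.t()
            + self.embedding.weight.pow(2).sum(dim=1)
        )
        indices = dist.argmin(dim=1)  # nearest codebook entry per spatial position
        z_q = self.embedding(indices).view(b, h, w, c).permute(0, 3, 1, 2)

        # Codebook loss + commitment loss
        loss = ((z_q - z.detach()) ** 2).mean() + self.beta * ((z - z_q.detach()) ** 2).mean()
        # Straight-through estimator: copy decoder gradients past the argmin to the encoder
        z_q = z + (z_q - z).detach()
        return z_q, indices.view(b, h, w), loss


# Usage: quantize a dummy latent map
vq = VectorQuantizer()
z = torch.randn(2, 256, 16, 16)
z_q, idx, vq_loss = vq(z)
print(z_q.shape, idx.shape, vq_loss.item())
```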
Results for First Stage (Reconstruction):
Epoch 1:
Epoch 50:
Results for Second Stage (Generating new Images):
Left: Original | Middle Left: Reconstruction | Middle Right: Completion | Right: New Image
Epoch 1:
Epoch 100:
Note: Train the model for longer to obtain better results.
Train VQGAN on your own data:
Training First Stage
- (optional) Configure hyperparameters in training_vqgan.py
- Set the path to your dataset in training_vqgan.py
- Run: python training_vqgan.py (a conceptual sketch of what one training step does follows below)
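For orientation, the sketch below shows what one first-stage training step does conceptually: encode, quantize, decode, then combine a reconstruction loss, the codebook loss, and an adversarial loss against a patch discriminator. Every module here is a tiny stand-in, the perceptual loss and adaptive discriminator weight from the paper are left out for brevity, and the VectorQuantizer is the one sketched earlier; training_vqgan.py contains the actual models and hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins for the fully convolutional encoder/decoder and the patch discriminator
encoder = nn.Conv2d(3, 256, kernel_size=4, stride=4)           # image -> latent grid
decoder = nn.ConvTranspose2d(256, 3, kernel_size=4, stride=4)   # latent grid -> image
discriminator = nn.Conv2d(3, 1, kernel_size=4, stride=4)        # per-patch real/fake logits
vq = VectorQuantizer()                                          # from the sketch above

opt_g = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(vq.parameters()), lr=2e-4
)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

images = torch.randn(4, 3, 64, 64)  # one dummy batch; in practice this comes from your dataset

# Autoencoder/generator step: reconstruction + codebook loss + adversarial loss
z_q, _, vq_loss = vq(encoder(images))
recon = decoder(z_q)
g_loss = F.l1_loss(recon, images) + vq_loss - discriminator(recon).mean()
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

# Discriminator step: hinge loss on real images vs. detached reconstructions
d_loss = F.relu(1.0 - discriminator(images)).mean() + F.relu(1.0 + discriminator(recon.detach())).mean()
opt_d.zero_grad()
d_loss.backward()
opt_d.step()
```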
Training Second Stage
- (optional) Configure hyperparameters in training_transformer.py
- Set the path to your dataset in training_transformer.py
- Run: python training_transformer.py (the sketch below outlines the idea behind this stage)
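The second stage can be pictured as plain next-token prediction over the codebook indices produced by the frozen first stage, followed by autoregressive sampling. The sketch below uses a deliberately small transformer and assumed sizes (1024 codes, a 16x16 latent grid flattened to 256 tokens); it is not the repository's model, so refer to training_transformer.py for the real architecture and training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_codes, seq_len, dim = 1024, 256, 512  # assumed codebook size and a 16x16 latent grid


class TinyGPT(nn.Module):
    """Deliberately small causal transformer over codebook indices (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(num_codes, dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, num_codes)

    def forward(self, idx):
        t = idx.shape[1]
        x = self.tok_emb(idx) + self.pos_emb[:, :t]
        causal = torch.full((t, t), float("-inf")).triu(diagonal=1)  # mask out future positions
        return self.head(self.blocks(x, mask=causal))


model = TinyGPT()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# Training step: predict the next codebook index given all previous ones.
# In the real pipeline the indices come from the frozen first-stage encoder + codebook.
indices = torch.randint(0, num_codes, (4, seq_len))
logits = model(indices[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, num_codes), indices[:, 1:].reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()

# Sampling: draw indices one at a time; mapping them back to codebook vectors and
# running the first-stage decoder then yields a new image.
tokens = torch.randint(0, num_codes, (1, 1))  # start token (a partial image gives a completion)
for _ in range(seq_len - 1):
    probs = F.softmax(model(tokens)[:, -1], dim=-1)
    tokens = torch.cat([tokens, torch.multinomial(probs, 1)], dim=1)
print(tokens.shape)  # (1, 256): one codebook index per latent position
```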
Citation
@misc{esser2021taming,
  title={Taming Transformers for High-Resolution Image Synthesis},
  author={Patrick Esser and Robin Rombach and Björn Ommer},
  year={2021},
  eprint={2012.09841},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}