Variational Auto-Encoder for MNIST

An implementation of the variational auto-encoder (VAE) for MNIST described in the paper "Auto-Encoding Variational Bayes" by Kingma and Welling (https://arxiv.org/abs/1312.6114).
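
The training objective is the evidence lower bound (ELBO): the expected reconstruction log-likelihood minus the KL divergence between the approximate posterior and the unit-Gaussian prior. As a reference point, here is a minimal, framework-agnostic numpy sketch of the two terms, assuming a Bernoulli decoder and a diagonal-Gaussian posterior (standard choices for binarized MNIST, not taken verbatim from this repository):

import numpy as np

def elbo_terms(x, y, mu, log_var):
    # x, y    : input and reconstructed pixels in (0, 1), shape (batch, 784)
    # mu      : means of the posterior q(z|x), shape (batch, dim_z)
    # log_var : log-variances of q(z|x), shape (batch, dim_z)
    eps = 1e-8  # numerical safety for the logarithms
    # Bernoulli reconstruction log-likelihood, summed over pixels
    recon = np.sum(x * np.log(y + eps) + (1 - x) * np.log(1 - y + eps), axis=1)
    # KL( N(mu, var) || N(0, I) ), closed form for diagonal Gaussians
    kl = 0.5 * np.sum(np.square(mu) + np.exp(log_var) - log_var - 1.0, axis=1)
    return np.mean(recon), np.mean(kl)  # training maximizes recon - kl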

Results

Reproduce

A well-trained VAE must be able to reproduce its input images.
Figure 5 in the paper shows the reproduction performance of learned generative models for different latent-space dimensionalities.
The following results can be reproduced with the command:

python run_main.py --dim_z <each value> --num_epochs 60
(Results: input images and their reconstructions for 2-D, 5-D, 10-D, and 20-D latent spaces.)
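
Reproducing an input amounts to encoding it to a posterior, sampling a latent code with the reparameterization trick, and decoding. A minimal numpy sketch, where encode and decode are hypothetical stand-ins for the trained MLP encoder and decoder networks:

import numpy as np

def reconstruct(x, encode, decode):
    # encode/decode are hypothetical stand-ins for the trained networks
    mu, log_var = encode(x)               # parameters of q(z|x)
    eps = np.random.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps  # reparameterization trick
    return decode(z)                      # reconstructed pixels in (0, 1)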

Denoising

During training, salt-and-pepper noise is added to the input images so that the VAE learns to remove the noise and restore the original input.
The following results can be reproduced with the command:

python run_main.py --dim_z 20 --add_noise True --num_epochs 40
(Results: original input images, noisy inputs, and images restored by the VAE.)
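
Salt-and-pepper noise sets a random subset of pixels to pure white or pure black. A minimal numpy sketch; the corruption ratio below is an illustrative assumption, not the value used by run_main.py:

import numpy as np

def add_salt_and_pepper(x, ratio=0.5):
    # x: images with pixel values in [0, 1]; `ratio` is an assumed fraction
    noisy = x.copy()
    corrupt = np.random.rand(*x.shape) < ratio  # pixels to corrupt
    salt = np.random.rand(*x.shape) < 0.5       # half salt, half pepper
    noisy[corrupt & salt] = 1.0                 # salt: white pixels
    noisy[corrupt & ~salt] = 0.0                # pepper: black pixels
    return noisy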

Learned MNIST manifold

Visualizations of the learned data manifold for generative models with a 2-D latent space are given in Figure 4 of the paper.
The following results can be reproduced with the command:

python run_main.py --dim_z 2 --num_epochs 60 --PMLR True
(Results: the learned MNIST manifold and the distribution of labeled data in the latent space.)
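
The manifold plot decodes a regular grid of latent codes. Mapping a uniform grid through the inverse CDF of the standard normal spreads the grid points evenly under the unit-Gaussian prior, as in Figure 4 of the paper. A minimal sketch with a hypothetical decode function:

import numpy as np
from scipy.stats import norm

def manifold_images(decode, n=20):
    # decode is a hypothetical stand-in for the trained decoder network
    grid = norm.ppf(np.linspace(0.05, 0.95, n))  # evenly spread under N(0, 1)
    z = np.array([[zx, zy] for zy in grid for zx in grid])  # (n*n, 2)
    return decode(z)  # (n*n, 784) decoded digit images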

Usage

Prerequisites

  1. Tensorflow
  2. Python packages: numpy, scipy, PIL (or Pillow), matplotlib

Command

python run_main.py --dim_z <latent vector dimension>

Example: python run_main.py --dim_z 20

Arguments

Required:

  • --dim_z: Dimension of latent vector. Default: 20

Optional:

  • --results_path: Directory where output images are saved. Default: results
  • --add_noise: Boolean for adding salt & pepper noise to input image. Default: False
  • --n_hidden: Number of hidden units in MLP. Default: 500
  • --learn_rate: Learning rate for Adam optimizer. Default: 1e-3
  • --num_epochs: The number of epochs to run. Default: 20
  • --batch_size: Batch size. Default: 128
  • --PRR: Boolean for plot-reproduce-result. Default: True
  • --PRR_n_img_x: Number of images along x-axis. Default: 10
  • --PRR_n_img_y: Number of images along y-axis. Default: 10
  • --PRR_resize_factor: Resize factor for each displayed image. Default: 1.0
  • --PMLR: Boolean for plot-manifold-learning-result. Default: False
  • --PMLR_n_img_x: Number of images along x-axis. Default: 20
  • --PMLR_n_img_y: Number of images along y-axis. Default: 20
  • --PMLR_resize_factor: Resize factor for each displayed image. Default: 1.0
  • --PMLR_n_samples: Number of samples used to plot the distribution of labeled data. Default: 5000
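
For orientation, the flags above could be declared with argparse roughly as follows. This is a hedged sketch reconstructed from the table, not the actual run_main.py source; in particular the str2bool helper is an assumption, since argparse does not parse "True"/"False" strings as booleans by default:

import argparse

def str2bool(v):
    # argparse's type=bool treats any non-empty string as True, so flags
    # passed as --add_noise True need explicit string-to-bool parsing
    return str(v).lower() in ('true', '1', 'yes')

def parse_args():
    p = argparse.ArgumentParser(description='VAE for MNIST')
    p.add_argument('--dim_z', type=int, default=20, help='dimension of latent vector')
    p.add_argument('--results_path', type=str, default='results')
    p.add_argument('--add_noise', type=str2bool, default=False)
    p.add_argument('--n_hidden', type=int, default=500)
    p.add_argument('--learn_rate', type=float, default=1e-3)
    p.add_argument('--num_epochs', type=int, default=20)
    p.add_argument('--batch_size', type=int, default=128)
    p.add_argument('--PRR', type=str2bool, default=True)
    p.add_argument('--PRR_n_img_x', type=int, default=10)
    p.add_argument('--PRR_n_img_y', type=int, default=10)
    p.add_argument('--PRR_resize_factor', type=float, default=1.0)
    p.add_argument('--PMLR', type=str2bool, default=False)
    p.add_argument('--PMLR_n_img_x', type=int, default=20)
    p.add_argument('--PMLR_n_img_y', type=int, default=20)
    p.add_argument('--PMLR_resize_factor', type=float, default=1.0)
    p.add_argument('--PMLR_n_samples', type=int, default=5000)
    return p.parse_args()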

References

This implementation is based on the following projects:
[1] https://github.com/oduerr/dl_tutorial/tree/master/tensorflow/vae
[2] https://github.com/fastforwardlabs/vae-tf/tree/master
[3] https://github.com/kvfrans/variational-autoencoder
[4] https://github.com/altosaar/vae

Acknowledgements

This implementation has been tested with Tensorflow r0.12 on Windows 10.
