  • Stars: 226
  • Rank: 176,514 (top 4%)
  • Language: Jupyter Notebook
  • License: MIT License
  • Created over 9 years ago
  • Updated over 1 year ago


Repository Details

Tools for using a variational auto-encoder for latent image encoding and generation.

Changelog

v1.0.x

  • Three model classes are now available: VAE, GAN, and VAEGAN.

  • All models support both convolutional and linear architectures.

  • Python 3 compatibility.

  • Updated to use Chainer 1.6.0.

  • Outputs intermediate generated images during training, so that progress can be inspected when run in a Jupyter notebook.

fauxtograph

This package contains classes for training three different unsupervised, generative image models: Variational Auto-encoders (VAE), Generative Adversarial Networks (GAN), and the more recently developed combination of the two (VAE/GAN). Descriptions of the inner workings of these algorithms can be found in

  1. Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational Bayes." arXiv preprint arXiv:1312.6114 (2013).
  2. Radford, Alec, et al. "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks." arXiv preprint arXiv:1511.06434 (2015).
  3. Larsen, Anders Boesen Lindbo, et al. "Autoencoding Beyond Pixels Using a Learned Similarity Metric." arXiv preprint arXiv:1512.09300 (2015).

respectively.
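As a rough orientation for reference 1, the core of a VAE can be sketched in a few lines of NumPy: the encoder produces a mean and log-variance, a latent sample is drawn via the reparameterization trick, and the objective includes a KL divergence term pulling the encoder toward the prior. This is an illustrative sketch, not code from this package:

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """Sample z = mu + sigma * eps; the draw stays differentiable in mu and sigma."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder, summed over latent dims."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# Toy 4-dimensional latent for a single image.
mu = np.zeros(4)
log_var = np.zeros(4)
z = reparameterize(mu, log_var)
print(z.shape)                      # (4,)
print(kl_divergence(mu, log_var))   # 0.0 -- posterior already matches the prior
```

When the encoder's output drifts away from the standard normal prior, the KL term grows, which is exactly the quantity traded off against reconstruction error during training.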

All models take in a series of images and can be trained to perform an encoding transform step, a generative inverse_transform step, or both. The package is built on top of the Chainer framework and includes an easy-to-use command line interface for training a Variational Auto-encoder and generating images with it.
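The transform / inverse_transform pattern mentioned above is the same encode/decode split you see in scikit-learn-style APIs. The toy class below is not fauxtograph code (it uses a linear PCA-style projection rather than a neural network), but it shows the shape of that workflow:

```python
import numpy as np

class LinearAutoencoder:
    """Toy stand-in (NOT fauxtograph code) illustrating the
    transform / inverse_transform workflow with a PCA-style projection."""

    def __init__(self, n_latent=2):
        self.n_latent = n_latent

    def fit(self, images):
        # images: (n_samples, n_pixels) array of flattened images
        self.mean_ = images.mean(axis=0)
        _, _, vt = np.linalg.svd(images - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_latent]
        return self

    def transform(self, images):
        # Encoding step: project images into the latent space.
        return (images - self.mean_) @ self.components_.T

    def inverse_transform(self, latents):
        # Generative step: map latent vectors back into image space.
        return latents @ self.components_ + self.mean_

rng = np.random.default_rng(0)
images = rng.random((16, 64))              # 16 fake 8x8 "images", flattened
model = LinearAutoencoder(n_latent=4).fit(images)
latents = model.transform(images)
recon = model.inverse_transform(latents)
print(latents.shape, recon.shape)          # (16, 4) (16, 64)
```

In fauxtograph the encoder and decoder are neural networks rather than a linear projection, but the round trip from images to latents and back follows the same two calls.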

Both the module and the training script are available by installing this package through PyPI. Otherwise, the main class that does all the heavy lifting lives in fauxtograph/fauxtograph.py (with dependencies in fauxtograph/vaegan.py), while the training/generation CLI script is in fauxtograph/fauxto.py.

To learn more about the command line tool and to get a better sense of how one might use it, please see the blog post on the Stitch Fix tech blog, multithreaded.

Installation

The simplest way to start using the module is to install it via pip:

$ pip install fauxtograph

This should additionally grab all necessary dependencies, including the main backend NN framework, Chainer. However, if you plan on using CUDA to train the model on a GPU, you'll need to additionally install the Chainer CUDA dependencies with

$ pip install chainer-cuda-deps
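To confirm the CUDA backend actually came up after installing the extra dependencies, a quick check like the following can help. The helper function is our own, not part of the package; it relies on Chainer's chainer.cuda.check_cuda_available, which raises an error when the CUDA toolchain is missing or broken:

```python
def cuda_ready():
    """Return True if Chainer's CUDA backend is usable, False otherwise."""
    try:
        from chainer import cuda
        cuda.check_cuda_available()   # raises if CUDA dependencies are missing
        return True
    except Exception:
        return False

print("CUDA backend available:", cuda_ready())
```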

Usage

To get started, you can either find your own image set to use or use the downloading tool to grab some of the Hubble/ESA space images, which I've found make for interesting results.

To grab the images and place them in an images folder run

$ fauxtograph download ./images

This process can take some time depending on your internet connection.

Then you can train a model and output it to disk with

$ fauxtograph train --kl_ratio 0.005 ./images ./models/model_name 

Finally, you can generate new images based on your trained model with

$ fauxtograph generate ./models/model_name_model.h5 ./models/model_name_opt.h5 ./models/model_name_meta.json ./generated_images_folder

Each command comes with a --help option to see possible optional arguments.

Tips

Using the CLI

  • To get the best results for generated images, you'll need either a rather large number of images (on the order of several hundred thousand or more) or images that are all quite similar, with minimal backgrounds.

  • As the model trains you should see the per-batch averages of both the KL divergence and the reconstruction loss. You may wish to adjust the ratio of these two terms with the --kl_ratio option to get better performance, should you find that training is driving one of the terms to zero too quickly (or too slowly).

  • If you have a CUDA-capable Nvidia GPU, use it. The model can train over 10 times faster by taking advantage of GPU processing.

  • Sometimes you will want to brighten your images when saving them, which can be done with the --image_multiplier argument.

  • If you manage to train a particularly interesting model and generate some neat images, then we'd like to see them. Use #fauxtograph if you decide to put them up on social media.
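The trade-off that --kl_ratio controls in the tips above can be written down directly. The sketch below is illustrative rather than the package's exact loss code (it uses mean squared error for the reconstruction term as an assumption), but it shows how scaling the KL term changes which part of the objective dominates:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var, kl_ratio=0.005):
    """Weighted VAE objective: reconstruction error plus kl_ratio * KL term."""
    recon = np.mean((x - x_recon) ** 2)   # reconstruction loss (MSE here)
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + kl_ratio * kl

x = np.ones(10)
x_recon = np.full(10, 0.9)
mu, log_var = np.full(4, 0.5), np.zeros(4)
# A smaller kl_ratio lets the reconstruction term dominate the total loss.
print(vae_loss(x, x_recon, mu, log_var, kl_ratio=0.005))
print(vae_loss(x, x_recon, mu, log_var, kl_ratio=1.0))
```

Lowering kl_ratio shifts the optimizer's attention toward reconstruction quality; raising it pulls the latent distribution harder toward the prior.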

Generally

  • GAN and VAEGAN models are highly sensitive to the relative learning rates of their subnetworks, particularly that of the generator relative to the discriminator. If you notice highly oscillatory behavior in your training losses, it can help to turn down the Adam alpha and beta1 parameters of one of the networks (usually the discriminator) so that the two train at a similar rate.
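To make that knob concrete, here is a single generic Adam update step in NumPy (textbook Adam, not Chainer's implementation): lowering alpha directly scales down the step a network takes each iteration, which is how you slow a discriminator that is winning too fast:

```python
import numpy as np

def adam_step(grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One generic Adam update; returns (parameter_delta, new_m, new_v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2       # second-moment estimate
    m_hat = m / (1 - beta1**t)                  # bias correction
    v_hat = v / (1 - beta2**t)
    return -alpha * m_hat / (np.sqrt(v_hat) + eps), m, v

grad = np.array([0.5, -0.5])
zeros = np.zeros(2)
gen_step, _, _ = adam_step(grad, zeros, zeros, t=1, alpha=0.001)
dis_step, _, _ = adam_step(grad, zeros, zeros, t=1, alpha=0.0002)  # slowed discriminator
# The slowed-down optimizer moves proportionally less per iteration.
print(np.abs(dis_step).max() / np.abs(gen_step).max())  # ~0.2
```

Dropping beta1 has a subtler effect: it shortens the momentum memory, which damps the oscillations that come from the two networks repeatedly overshooting each other.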

ENJOY
