# Image Compression and Generation using Variational Autoencoders in Python
This project introduces the variational autoencoder (VAE). I will cover some basic theory behind the model and then build a machine learning project around this architecture. Our data comprises 60,000 characters drawn from a dataset of fonts. We will train a variational autoencoder capable of compressing this font character data from 2,500 dimensions down to 32, and the same model can then reconstruct its original input with high fidelity. The real advantage of the variational autoencoder, however, is its ability to generate new outputs drawn from a distribution that closely follows its training data: we can sample characters in brand-new fonts.
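As a rough illustration of the architecture described above, here is a minimal VAE sketch in Python using PyTorch. Only the 2,500-dimensional input and the 32-dimensional latent space come from the project description; the single 512-unit hidden layer, the choice of PyTorch, and the loss formulation are assumptions for the sake of a self-contained example, not the project's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    """Minimal VAE sketch: 2,500-dim flattened glyph -> 32-dim latent -> reconstruction.

    Layer sizes other than input_dim and latent_dim are illustrative assumptions.
    """

    def __init__(self, input_dim=2500, hidden_dim=512, latent_dim=32):
        super().__init__()
        # Encoder: maps the flattened character image to the parameters of q(z|x)
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps a latent code back to pixel space
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.dec1(z))
        return torch.sigmoid(self.dec2(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld


if __name__ == "__main__":
    # Toy usage: encode and reconstruct a random batch of "characters",
    # then sample brand-new glyph vectors from the 32-dim prior.
    model = VAE()
    x = torch.rand(16, 2500)
    recon, mu, logvar = model(x)
    print("loss:", vae_loss(recon, x, mu, logvar).item())
    new_glyphs = model.decode(torch.randn(4, 32))
    print("generated shape:", new_glyphs.shape)
```

The key design point is the KL term in the loss: it pushes the learned latent distribution toward a standard normal, which is what lets us later sample random 32-dimensional codes and decode them into plausible new characters rather than noise.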