# DanceNet - Dance generator using Variational Autoencoder, LSTM and Mixture Density Network (Keras)

Main components:

- Variational autoencoder
- LSTM + Mixture Density Layer
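The Mixture Density Layer makes the model predict the parameters of a Gaussian mixture (component weights, means, and standard deviations) instead of a single point, and the next latent vector is sampled from that mixture. A minimal NumPy sketch of the sampling step (the component count and latent dimension below are illustrative, not taken from the repository):

```python
import numpy as np

def sample_from_mixture(pi, mu, sigma):
    """Draw one sample from a mixture of diagonal Gaussians.

    pi:    (K,)   mixture weights, summing to 1
    mu:    (K, D) component means
    sigma: (K, D) component standard deviations
    """
    k = np.random.choice(len(pi), p=pi)   # pick a component by its weight
    return np.random.normal(mu[k], sigma[k])  # sample from that Gaussian

# Example: 3 components over a 2-D latent space
pi = np.array([0.7, 0.2, 0.1])
mu = np.zeros((3, 2))
sigma = np.full((3, 2), 0.1)
sample = sample_from_mixture(pi, mu, sigma)
print(sample.shape)  # (2,)
```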
## Requirements

Python version: 3.5.2

Packages:

- keras==2.2.0
- sklearn==0.19.1
- numpy==1.14.3
- opencv-python==3.4.1
## Dataset

The model was trained on this video: https://www.youtube.com/watch?v=NdSqAAT28v0
## How to run locally

- Download the trained weights from here and extract them to the dancenet directory.
- Run `dancegen.ipynb`.
## How to run in your browser

- Click the button above to open this code in a FloydHub workspace (the trained-weights dataset is attached to the environment automatically).
- Run `dancegen.ipynb`.
## Training from scratch

- Fill the `imgs/` folder with dance-sequence frames labeled `1.jpg`, `2.jpg`, ...
- Run `model.py`.
- Run `gen_lv.py` to encode the images.
- Run `video_from_lv.py` to test the decoded video.
- Run the Jupyter notebook `dancegen.ipynb` to train DanceNet and generate a new video.
## References

- "Does my AI have better dance moves than me?" by Cary Huang
- "Generative Choreography using Deep Learning" (chor-rnn)
- "Building Autoencoders in Keras" by Francois Chollet
- "Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras"
- "Mixture Density Networks" by David Ha
- "Mixture Density Layer for Keras" by Charles Martin