  • Stars: 211
  • Rank: 185,780 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: over 7 years ago
  • Updated: over 2 years ago

Repository Details

Dance Dance Convolution dataset tools and models

Dance Dance Convolution

Dance Dance Convolution is an automatic choreography system for Dance Dance Revolution (DDR), converting raw audio into playable dances.

This repository contains the code used to produce the dataset and results in the Dance Dance Convolution paper. You can find a live demo of our system here as well as an example video.

The Fraxtil and In The Groove datasets from the paper are amalgamations of three and two StepMania "packs" respectively. Instructions for downloading these packs and building the datasets can be found below.

We are in the process of reimplementing this code (under branch master_v2), primarily to add on-the-fly feature extraction and remove the essentia dependency. However, you can get started with master if you are eager to dance.

Please email me with any issues: cdonahue [@at@] ucsd (.dot.) edu

Attribution

If you use this dataset in your research, please cite it via the following BibTeX:

@inproceedings{donahue2017dance,
  title={Dance Dance Convolution},
  author={Donahue, Chris and Lipton, Zachary C and McAuley, Julian},
  booktitle={Proceedings of the 34th International Conference on Machine Learning},
  year={2017},
}

Requirements

  • tensorflow 0.12.1 (see Running demo locally below)
  • essentia (being removed in the master_v2 rewrite)

Directory description

  • dataset/: code to generate the dataset from StepMania files
  • infer/: code to run demo locally
  • learn/: code to train step placement (onset) and selection (sym) models (the two-stage pipeline is sketched after this list)
  • scripts/: shell scripts to build the dataset (smd_*) and train (sml_*)
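
For orientation, the sketch below shows how the two learn/ models fit together at inference time: step placement predicts where steps fall in the audio, and step selection chooses which arrows go there. All function names and the toy feature values are hypothetical; this is not the repository's actual API.

from typing import List, Tuple

def extract_features(audio_path: str) -> List[float]:
    """Stand-in for real audio feature extraction (essentia in master)."""
    return [0.1, 0.9, 0.2, 0.8, 0.3]  # toy per-frame onset scores

def place_steps(features: List[float], threshold: float = 0.5) -> List[int]:
    """Step placement (onset detection): keep frames whose score clears a threshold."""
    return [i for i, score in enumerate(features) if score >= threshold]

def select_steps(placements: List[int]) -> List[Tuple[int, str]]:
    """Step selection (symbolic): assign an arrow to each placed step."""
    arrows = ["left", "down", "up", "right"]
    return [(frame, arrows[i % len(arrows)]) for i, frame in enumerate(placements)]

if __name__ == "__main__":
    feats = extract_features("song.wav")      # hypothetical input file
    chart = select_steps(place_steps(feats))
    print(chart)                              # [(1, 'left'), (3, 'down')]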

Running demo locally

The demo (unfortunately) requires tensorflow 0.12.1 and essentia; a virtualenv is recommended.

  1. Install tensorflow 0.12.1
  2. Run server: ./ddc_server.sh
  3. Send server choreography requests: python ddc_client.py $ARTIST_NAME $SONG_TITLE $FILEPATH
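
As a concrete illustration of step 3, the snippet below simply wraps the documented ddc_client.py command in a Python subprocess call; the artist name, song title, and file path are placeholder values.

import subprocess

# Hypothetical values for $ARTIST_NAME, $SONG_TITLE, and $FILEPATH.
subprocess.run(
    ["python", "ddc_client.py", "Some Artist", "Some Song", "/path/to/song.mp3"],
    check=True,
)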

Building dataset

  1. Make a directory named data under ~/ddc (or change scripts/var.sh to point to a different directory); the resulting layout is sketched after this list
  2. Under data, make directories raw, json_raw and json_filt
  3. Under data/raw, make directories fraxtil and itg
  4. Under data/raw/fraxtil, download and unzip:
  5. Under data/raw/itg, download and unzip:
  6. Navigate to scripts/
  7. Parse .sm files to JSON: ./all.sh ./smd_1_extract.sh
  8. Filter JSON files (removing mines, etc.): ./all.sh ./smd_2_filter.sh
  9. Split dataset 80/10/10: ./all.sh ./smd_3_dataset.sh
  10. Analyze dataset (e.g.): ./smd_4_analyze.sh fraxtil
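
For convenience, here is a small Python sketch that creates the directory layout described in steps 1-4 above; downloading and unzipping the packs (steps 4-5) remains a manual step, since the pack links are not reproduced here.

from pathlib import Path

# Directory layout expected by scripts/var.sh (steps 1-4 above).
data = Path.home() / "ddc" / "data"
for sub in ["raw/fraxtil", "raw/itg", "json_raw", "json_filt"]:
    (data / sub).mkdir(parents=True, exist_ok=True)
print("created", data)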

Running training

  1. Navigate to scripts/
  2. Extract features: ./all.sh ./sml_onset_0_extract.sh
  3. Generate chart .pkl files (this may take a while): ./all.sh ./sml_onset_1_chart.sh
  4. Train a step placement (onset detection) model on a dataset: ./sml_onset_2_train.sh fraxtil
  5. Train a step selection (symbolic) model on a dataset: ./sml_sym_2_train.sh fraxtil
  6. Train and evaluate a Laplace-smoothed 5-gram model on a dataset: ./sml_sym_2_mark.sh fraxtil 5
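
The baseline in step 6 is a Laplace-smoothed n-gram over step sequences. The sketch below is only a toy illustration of what add-one (Laplace) smoothing does to an n-gram probability estimate, using made-up step tokens; it is not the model trained by sml_sym_2_mark.sh.

from collections import Counter

def laplace_ngram_prob(history, token, corpus, n=5, vocab_size=None):
    """P(token | history) with add-one (Laplace) smoothing over (n-1)-token histories."""
    context = tuple(history[-(n - 1):])
    ngrams = Counter(tuple(corpus[i:i + n]) for i in range(len(corpus) - n + 1))
    contexts = Counter(tuple(corpus[i:i + n - 1]) for i in range(len(corpus) - n + 1))
    vocab = vocab_size or len(set(corpus))
    return (ngrams[context + (token,)] + 1) / (contexts[context] + vocab)

# Made-up step tokens: left, down, up, right.
steps = ["L", "D", "U", "R", "L", "D", "U", "R", "L", "D"]
print(laplace_ngram_prob(["L", "D", "U", "R"], "L", steps))  # 0.5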

More Repositories

  1. wavegan (Python, 1,323 stars) - WaveGAN: Learn to synthesize raw audio with generative adversarial networks
  2. nesmdb (Python, 450 stars) - The NES Music Database: use machine learning to compose music for the Nintendo Entertainment System!
  3. LakhNES (Python, 332 stars) - Generate 8-bit chiptunes with deep learning
  4. sheetsage (Python, 288 stars) - Transcribe music into lead sheets!
  5. ilm (Python, 196 stars) - Easily fine tune GPT-2 to fill in missing text
  6. music-cocreation-tutorial (Jupyter Notebook, 104 stars) - Start-to-finish tutorial for interactive music co-creation in PyTorch and Tensorflow.js
  7. sdgan (Python, 95 stars) - Official implementation of "Semantically Decomposing the Latent Spaces of Generative Adversarial Networks"
  8. opengl_spectrogram (C++, 39 stars) - Using JUCE to create a 3D spectrogram drawn with OpenGL
  9. neural-loops (JavaScript, 34 stars) - Make musical loops in the browser using WaveGAN, GANSynth, and MusicVAE
  10. ddc_onset (Python, 31 stars) - Music onset detector from Dance Dance Convolution packaged as a lightweight PyTorch module
  11. midi2key_linux (Shell, 13 stars) - Simple script to convert MIDI inputs to hotkeys on Linux
  12. fall23-phd-prospectives (12 stars) - Info for prospective PhD students for Chris Donahue's lab at CMU starting Fall 23
  13. piano-transcribe-batch (JavaScript, 11 stars) - Uses Magenta's Onsets and Frames piano transcription model to transcribe a batch of solo piano recordings
  14. piano-genie-research-demo (TypeScript, 10 stars) - The old Piano Genie demo; for a shiny new one, go to http://piano-genie.glitch.me
  15. gdrive-wget (JavaScript, 5 stars) - Generate wget commands for Google Drive links!
  16. ject (C++, 5 stars) - JUCE extended convolution techniques GUI
  17. wavegan_examples (HTML, 3 stars) - Sound examples for WaveGAN
  18. gpsynth (C++, 2 stars) - Audio synthesis using genetic programming
  19. chrisdonahue.github.io (Jupyter Notebook, 1 star)
  20. advoc_examples (HTML, 1 star)
  21. sheetsage-lbd (TeX, 1 star) - Sound examples for Sheet Sage at ISMIR 2021 late breaking demos: https://archives.ismir.net/ismir2021/latebreaking/000049.pdf
  22. js_audio_examples (JavaScript, 1 star) - Repository for open source JS/Web Audio API computer music tutorials