
A TensorFlow implementation of DeepMind's WaveNet paper


This is a TensorFlow implementation of the WaveNet generative neural network architecture for audio generation.

The WaveNet neural network architecture directly generates a raw audio waveform, showing excellent results in text-to-speech and general audio generation (see the DeepMind blog post and paper for details).

The network models the conditional probability of the next sample in the audio waveform, given all previous samples and possibly additional parameters; that is, it factorizes the joint probability of a waveform x as p(x) = ∏_t p(x_t | x_1, ..., x_{t-1}).

After an audio preprocessing step, the input waveform is quantized to a fixed integer range. The integer amplitudes are then one-hot encoded to produce a tensor of shape (num_samples, num_channels).
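
For illustration, here is a minimal numpy sketch of this preprocessing step, assuming the mu-law companding with 256 quantization levels described in the paper (the function names are ours, not the repository's):

import numpy as np

def mu_law_encode(audio, quantization_channels=256):
    # audio: float waveform scaled to [-1, 1]; returns integer labels in [0, 255]
    mu = quantization_channels - 1
    magnitude = np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    signal = np.sign(audio) * magnitude  # companded signal, still in [-1, 1]
    return ((signal + 1) / 2 * mu + 0.5).astype(np.int32)

def one_hot_encode(labels, quantization_channels=256):
    # (num_samples,) integer labels -> (num_samples, num_channels) one-hot matrix
    encoded = np.zeros((labels.size, quantization_channels), dtype=np.float32)
    encoded[np.arange(labels.size), labels] = 1.0
    return encoded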

A convolutional layer that only accesses the current and previous inputs then reduces the channel dimension.

The core of the network is a stack of causal dilated layers. Each is a dilated convolution (convolution with holes) that only accesses the current and past audio samples.
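
Conceptually, a single such layer can be written as follows (a simplified numpy sketch; the full WaveNet layer described in the paper also adds gated activation units and residual/skip connections):

import numpy as np

def causal_dilated_conv(inputs, weights, dilation):
    # inputs: (num_samples, in_channels); weights: (filter_width, in_channels, out_channels)
    filter_width = weights.shape[0]
    pad = (filter_width - 1) * dilation
    # left-pad with zeros so each output depends only on current and past samples
    padded = np.pad(inputs, ((pad, 0), (0, 0)))
    out = np.zeros((inputs.shape[0], weights.shape[2]), dtype=np.float32)
    for tap in range(filter_width):
        offset = tap * dilation
        out += padded[offset:offset + inputs.shape[0]] @ weights[tap]
    return out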

The outputs of all layers are combined and extended back to the original number of channels by a series of dense postprocessing layers, followed by a softmax function to transform the outputs into a categorical distribution.

The loss function is the cross-entropy between the output for each timestep and the input at the next timestep.
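
That is, the network is trained to predict the next quantized sample. A minimal numpy sketch of this loss, assuming probs holds the softmax outputs and labels the quantized input samples:

import numpy as np

def next_sample_loss(probs, labels):
    # probs: (num_samples, num_channels) softmax outputs
    # labels: (num_samples,) quantized integer input samples
    # the output at timestep t is scored against the input at timestep t + 1
    predictions = probs[:-1]
    targets = labels[1:]
    return -np.mean(np.log(predictions[np.arange(targets.size), targets] + 1e-9))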

In this repository, the network implementation can be found in model.py.

Requirements

TensorFlow needs to be installed before running the training script. The code is tested with TensorFlow 1.0.1 on Python 2.7 and Python 3.5.

In addition, librosa must be installed for reading and writing audio.

To install the required Python packages, run

pip install -r requirements.txt

For GPU support, use

pip install -r requirements_gpu.txt

Training the network

You can use any corpus containing .wav files. We've mainly used the VCTK corpus (around 10.4 GB) so far.

To train the network, execute

python train.py --data_dir=corpus

where corpus is a directory containing .wav files. The script will recursively collect all .wav files in the directory.

You can see documentation on each of the training settings by running

python train.py --help

You can find the configuration of the model parameters in wavenet_params.json. These need to stay the same between training and generation.

Global Conditioning

Global conditioning refers to modifying the model so that the id of one of a set of mutually exclusive categories is specified during both training and generation of .wav files. In the case of the VCTK corpus, this id is the integer id of the speaker, of which there are over a hundred. This allows (indeed requires) a speaker id to be specified at generation time to select which speaker the model should mimic. For more details, see the paper or the source code.

Training with Global Conditioning

The instructions above for training refer to training without global conditioning. To train with global conditioning, specify command-line arguments as follows:

python train.py --data_dir=corpus --gc_channels=32

The --gc_channels argument does two things:

  • It tells the train.py script that it should build a model that includes global conditioning.
  • It specifies the size of the embedding vector that is looked up based on the id of the speaker (see the sketch below).
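
Conceptually, the lookup amounts to selecting one row of an embedding table, as in this minimal numpy sketch (in the real model the table is a trained TensorFlow variable, not a random matrix):

import numpy as np

gc_cardinality = 377  # number of distinct speaker ids
gc_channels = 32      # embedding size, set by --gc_channels

embedding_table = np.random.normal(size=(gc_cardinality, gc_channels))
speaker_embedding = embedding_table[311]  # row selected by the speaker id

The selected embedding is then used as an additional conditioning input throughout the network.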

The global conditioning logic in train.py and audio_reader.py is "hard-wired" to the VCTK corpus at the moment, in that it expects to be able to determine the speaker id from the file naming pattern used in VCTK, but it can easily be modified.
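
VCTK files follow the pattern p<speaker-id>_<utterance>.wav, so adapting the code to another corpus largely comes down to swapping out a pattern like the one below (a hypothetical helper for illustration, not the repository's code):

import os
import re

def vctk_speaker_id(path):
    # VCTK files are named like p280_123.wav; the digits after 'p' are the speaker id
    match = re.match(r'p(\d+)_\d+\.wav$', os.path.basename(path))
    return int(match.group(1)) if match else None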

Generating audio

Example output generated by @jyegerlehner based on speaker 280 from the VCTK corpus.

You can use the generate.py script to generate audio using a previously trained model.

Generating without Global Conditioning

Run

python generate.py --samples 16000 logdir/train/2017-02-13T16-45-34/model.ckpt-80000

where logdir/train/2017-02-13T16-45-34/model.ckpt-80000 needs to be a path to a previously saved model (without extension). The --samples parameter specifies how many audio samples you would like to generate (16000 corresponds to 1 second at the default sample rate of 16000 Hz).

The generated waveform can be played back using TensorBoard, or stored as a .wav file by using the --wav_out_path parameter:

python generate.py --wav_out_path=generated.wav --samples 16000 logdir/train/2017-02-13T16-45-34/model.ckpt-80000

Passing --save_every in addition to --wav_out_path will save the in-progress wav file every n samples.

python generate.py --wav_out_path=generated.wav --save_every 2000 --samples 16000 logdir/train/2017-02-13T16-45-34/model.ckpt-80000

Fast generation is enabled by default. It uses the implementation from the Fast Wavenet repository; see that repository for an explanation of how it works. This reduces the time needed to generate samples to a few minutes.
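
The key idea is to cache each layer's past activations in per-layer FIFO queues, so producing one new sample costs work proportional to the number of layers rather than to the size of the receptive field. Roughly (a conceptual sketch with a placeholder layer function, not the repository's code):

import numpy as np
from collections import deque

channels = 32
dilations = [1, 2, 4, 8]

def layer(past, current):
    # placeholder for a trained width-2 dilated convolution layer
    return np.tanh(past + current)

# one FIFO per layer, holding that layer's last `dilation` input activations
queues = [deque([np.zeros(channels)] * d, maxlen=d) for d in dilations]

def generate_step(x):
    # a width-2 dilated conv needs only the current input and the one from
    # `dilation` steps ago, which sits at the front of that layer's queue
    for q in queues:
        past = q[0]
        q.append(x)  # maxlen automatically drops the oldest entry
        x = layer(past, x)
    return x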

To disable fast generation:

python generate.py --samples 16000 logdir/train/2017-02-13T16-45-34/model.ckpt-80000 --fast_generation=false

Generating with Global Conditioning

Generate from a model incorporating global conditioning as follows:

python generate.py --samples 16000  --wav_out_path speaker311.wav --gc_channels=32 --gc_cardinality=377 --gc_id=311 logdir/train/2017-02-13T16-45-34/model.ckpt-80000

Where:

--gc_channels=32 specifies that the embedding vector has size 32, and must match the value used during training.

--gc_cardinality=377 is required because 376 is the largest speaker id in the VCTK corpus (ids 0 through 376 give 377 categories). If another corpus is used, this number should match what is automatically determined and printed by the train.py script at training time.

--gc_id=311 specifies the id of the speaker (here, speaker 311) for which a sample is to be generated.

Running tests

Install the test requirements

pip install -r requirements_test.txt

Run the test suite

./ci/test.sh

Missing features

Currently there is no local conditioning on extra information, which would allow context stacks or control over what speech is generated.

