Deepvoice3_pytorch

PyTorch implementation of convolutional-network-based text-to-speech synthesis models:

  1. arXiv:1710.07654: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning.
  2. arXiv:1710.08969: Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention.

Audio samples are available at https://r9y9.github.io/deepvoice3_pytorch/.

Folks

Online TTS demo

Notebooks intended to be executed on https://colab.research.google.com are available:

Highlights

  • Convolutional sequence-to-sequence model with attention for text-to-speech synthesis
  • Multi-speaker and single speaker versions of DeepVoice3
  • Audio samples and pre-trained models
  • Preprocessor for LJSpeech (en), JSUT (jp) and VCTK datasets, as well as carpedm20/multi-speaker-tacotron-tensorflow compatible custom dataset (in JSON format)
  • Language-dependent frontend text processor for English and Japanese

Samples

Pretrained models

NOTE: The pre-trained models are not compatible with master. They will be updated soon.

URL  | Model                    | Data     | Hyper parameters                                       | Git commit | Steps
link | DeepVoice3               | LJSpeech | link                                                   | abf0a21    | 640k
link | Nyanko                   | LJSpeech | builder=nyanko,preset=nyanko_ljspeech                  | ba59dc7    | 585k
link | Multi-speaker DeepVoice3 | VCTK     | builder=deepvoice3_multispeaker,preset=deepvoice3_vctk | 0421749    | 300k + 300k

To use the pre-trained models, it is highly recommended that you check out the specific git commit noted above, i.e.:

git checkout ${commit_hash}

Then follow the "Synthesize from a checkpoint" section in the README of that specific git commit. Note that the latest development version of the repository may not work with these checkpoints.

For example, you could try:

# pretrained model (20180505_deepvoice3_checkpoint_step000640000.pth)
# hparams (20180505_deepvoice3_ljspeech.json)
git checkout 4357976
python synthesis.py --preset=20180505_deepvoice3_ljspeech.json \
  20180505_deepvoice3_checkpoint_step000640000.pth \
  sentences.txt \
  output_dir

Notes on hyper parameters

  • Default hyper parameters, used during the preprocessing/training/synthesis stages, are tuned for English TTS with the LJSpeech dataset. You will have to change some of the parameters if you want to try other datasets. See hparams.py for details.
  • builder specifies which model you want to use. deepvoice3, deepvoice3_multispeaker [1] and nyanko [2] are supported.
  • The hyper parameters described in the DeepVoice3 paper for the single-speaker model didn't work for the LJSpeech dataset, so I changed a few things: dilated convolutions, more channels, more layers, a guided attention loss, etc. See the code for details. The changes also apply to the multi-speaker model.
  • Multiple attention layers are hard to learn. Empirically, one or two attention layers (first and last) seem to be enough.
  • With guided attention (see https://arxiv.org/abs/1710.08969), alignments become monotonic more quickly and reliably if we use multiple attention layers. I can confirm that five attention layers become monotonic with guided attention, though I could not get speech quality improvements from it. (A small NumPy sketch of the guided attention weights follows this list.)
  • Binary divergence (described in https://arxiv.org/abs/1710.08969) seems to stabilize training, particularly for deep (> 10 layer) networks.
  • Adam with step LR decay works. However, for deeper networks, I find Adam combined with Noam's LR scheduler to be more stable.
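
For reference, here is a minimal NumPy sketch of the guided attention weight matrix from arXiv:1710.08969, written as standalone code rather than this repository's actual implementation; N and T stand for the text length and the number of decoder time steps of one utterance:

import numpy as np

def guided_attention_weights(N, T, g=0.2):
    # W[n, t] grows as the attention position (n/N, t/T) moves away from the diagonal.
    n = np.arange(N).reshape(-1, 1) / N
    t = np.arange(T).reshape(1, -1) / T
    return 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))

# The guided attention loss is then the mean of W * A, where A is the predicted
# attention matrix (text positions x decoder time steps).
W = guided_attention_weights(N=50, T=200)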

Requirements

  • Python >= 3.5
  • CUDA >= 8.0
  • PyTorch >= v1.0.0
  • nnmnkwii >= v0.0.11
  • MeCab (Japanese only)

Installation

Please install the packages listed above first, and then run:

git clone https://github.com/r9y9/deepvoice3_pytorch && cd deepvoice3_pytorch
pip install -e ".[bin]"

Getting started

Preset parameters

There are many hyper parameters to be tuned, depending on which model and data you are working with. For typical datasets and models, parameters known to work well (presets) are provided in the repository. See the presets directory for details. Notice that

  1. preprocess.py
  2. train.py
  3. synthesis.py

accept the optional --preset=<json> parameter, which specifies which file to load the preset parameters from. If you are going to use preset parameters, you must use the same --preset=<json> throughout preprocessing, training and evaluation, e.g.,

python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech

instead of

python preprocess.py ljspeech ~/data/LJSpeech-1.0
# warning! this may use hyper parameters different from those used at the preprocessing stage
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech
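
Conceptually, a preset is just a JSON file of hyper parameter values that is applied on top of the defaults at every stage, which is why the same file must be passed to all three scripts. A minimal sketch of the idea with plain dictionaries (the parameter names below are illustrative stand-ins, not necessarily the repository's actual keys):

import json

# Stand-in for the defaults defined in hparams.py (names chosen for illustration).
defaults = {"builder": "deepvoice3", "sample_rate": 22050, "outputs_per_step": 4}

# In practice this dict comes from the JSON file passed via --preset=<json>.
preset = json.loads('{"builder": "deepvoice3", "outputs_per_step": 1}')

hparams = {**defaults, **preset}  # preset values override the defaults
print(hparams)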

0. Download dataset

1. Preprocessing

Usage:

python preprocess.py ${dataset_name} ${dataset_path} ${out_dir} --preset=<json>

Supported ${dataset_name}s are:

  • ljspeech (en, single speaker)
  • vctk (en, multi-speaker)
  • jsut (jp, single speaker)
  • nikl_m (ko, multi-speaker)
  • nikl_s (ko, single speaker)

Assuming you use the preset parameters known to work well for the LJSpeech dataset / DeepVoice3 and have the data in ~/data/LJSpeech-1.0, you can preprocess the data by:

python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0/ ./data/ljspeech

When this is done, you will see the extracted features (mel-spectrograms and linear spectrograms) in ./data/ljspeech.
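
To sanity-check the output, the extracted features can be loaded with NumPy. A minimal sketch (the exact file naming inside ./data/ljspeech may differ between versions):

import glob
import numpy as np

# Print the shape (frames x feature dimension) of a few extracted feature files.
for path in sorted(glob.glob("./data/ljspeech/*.npy"))[:3]:
    feats = np.load(path)
    print(path, feats.shape)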

1-1. Building a custom dataset (using json_meta)

Building your own dataset with metadata in JSON format (compatible with carpedm20/multi-speaker-tacotron-tensorflow) is currently supported. Usage:

python preprocess.py json_meta ${list-of-JSON-metadata-paths} ${out_dir} --preset=<json>

You may need to modify a pre-existing preset JSON file, especially n_speakers. For English multi-speaker models, start with presets/deepvoice3_vctk.json.

Assuming you have dataset A (speaker A) and dataset B (speaker B), described by the JSON metadata files ./datasets/datasetA/alignment.json and ./datasets/datasetB/alignment.json respectively, you can preprocess the data by:

python preprocess.py json_meta "./datasets/datasetA/alignment.json,./datasets/datasetB/alignment.json" "./datasets/processed_A+B" --preset=(path to preset json file)
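
For reference, here is a rough sketch of how such a JSON metadata file might be assembled. The exact schema is defined by carpedm20/multi-speaker-tacotron-tensorflow, so treat the audio-path-to-transcript mapping below as an assumption and check that project for the authoritative format:

import json
import os

# Hypothetical mapping from audio file paths to transcripts (schema assumed, not verified).
alignment = {
    "./datasets/datasetA/audio/0001.wav": "Hello, this is speaker A.",
    "./datasets/datasetA/audio/0002.wav": "Another utterance from speaker A.",
}

os.makedirs("./datasets/datasetA", exist_ok=True)
with open("./datasets/datasetA/alignment.json", "w", encoding="utf-8") as f:
    json.dump(alignment, f, ensure_ascii=False, indent=2)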

1-2. Preprocessing custom English datasets with long silences (based on vctk_preprocess)

Some datasets, especially automatically generated ones, may include long silences and undesirable leading/trailing noises, which undermine the character-level seq2seq model (e.g. VCTK, although this is covered in vctk_preprocess).

To deal with the problem, gentle_web_align.py will

  • Prepare phoneme alignments for all utterances
  • Cut silences during preprocessing

gentle_web_align.py uses Gentle, a Kaldi-based speech-text alignment tool. It accesses a web-served Gentle application, aligns the given sound segments with their transcripts, and converts the results to HTK-style label files to be processed by preprocess.py. Gentle can be run on Linux/macOS/Windows (via Docker).

Preliminary results show that while the HTK/festival/merlin-based method in vctk_preprocess/prepare_vctk_labels.py works better on VCTK, Gentle is more stable for audio clips with ambient noise (e.g. movie excerpts).

Usage (assuming Gentle is running at localhost:8567, the default when not specified):

  1. When sound files and transcript files are saved in separate folders (e.g. sound files in datasetA/wavs and transcripts in datasetA/txts):

python gentle_web_align.py -w "datasetA/wavs/*.wav" -t "datasetA/txts/*.txt" --server_addr=localhost --port=8567

  2. When sound files and transcript files are saved in a nested structure (e.g. datasetB/speakerN/blahblah.wav and datasetB/speakerN/blahblah.txt):

python gentle_web_align.py --nested-directories="datasetB" --server_addr=localhost --port=8567

Once you have the phoneme alignment for each utterance, you can extract features by running preprocess.py.
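
For intuition about what the alignment step buys you, here is a conceptual sketch of trimming silence once phoneme alignments exist. It assumes a simple HTK-style label file with one "start end label" line per segment (times in 100 ns units) and is not the repository's actual preprocessing code; the silence label names are also assumptions:

from scipy.io import wavfile

def trim_with_labels(wav_path, lab_path, silence_labels=("sil", "pau", "sp")):
    # Keep only the audio between the first and last non-silence segments.
    sr, audio = wavfile.read(wav_path)
    speech = []
    with open(lab_path) as f:
        for line in f:
            start, end, label = line.split()
            if label not in silence_labels:
                # HTK label times are in 100 ns units; convert to sample indices.
                speech.append((int(int(start) * 1e-7 * sr), int(int(end) * 1e-7 * sr)))
    if not speech:
        return audio
    return audio[speech[0][0]:speech[-1][1]]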

2. Training

Usage:

python train.py --data-root=${data-root} --preset=<json> --hparams="parameters you may want to override"

Suppose you want to build a DeepVoice3-style model using the LJSpeech dataset; then you can train your model by:

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech/

Model checkpoints (.pth) and alignments (.png) are saved in the ./checkpoints directory every 10000 steps by default.
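
Checkpoints are ordinary torch.save files, so they can be inspected directly. A minimal sketch (the checkpoint file name follows the naming pattern seen elsewhere in this README, and the dictionary keys are assumptions that may differ between versions):

import torch

# Load on the CPU so no GPU is needed just to inspect the file.
checkpoint = torch.load("checkpoints/checkpoint_step000010000.pth", map_location="cpu")
print(list(checkpoint.keys()))  # model weights plus (possibly) training metadata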

NIKL

Please check this in advance and follow the commands below.

python preprocess.py nikl_s ${your_nikl_root_path} data/nikl_s --preset=presets/deepvoice3_nikls.json

python train.py --data-root=./data/nikl_s --checkpoint-dir checkpoint_nikl_s --preset=presets/deepvoice3_nikls.json

3. Monitor with TensorBoard

Logs are dumped into the ./log directory by default. You can monitor them with TensorBoard:

tensorboard --logdir=log

4. Synthesize from a checkpoint

Given a list of texts, synthesis.py synthesizes audio signals from a trained model. Usage:

python synthesis.py ${checkpoint_path} ${text_list.txt} ${output_dir} --preset=<json>

Example test_list.txt:

Generative adversarial network or variational auto-encoder.
Once upon a time there was a dear little girl who was loved by every one who looked at her, but most of all by her grandmother, and there was nothing that she would not have given to the child.
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module.

Advanced usage

Multi-speaker model

VCTK and NIKL are the supported datasets for building a multi-speaker model.

VCTK

Since some audio samples in VCTK have long silences that affect performance, it's recommended to do phoneme alignment and remove silences according to vctk_preprocess.

Once you have phoneme alignment for each utterance, you can extract features by:

python preprocess.py vctk ${your_vctk_root_path} ./data/vctk

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --preset=presets/deepvoice3_vctk.json \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset

If you want to reuse a speaker embedding learned on another dataset, you can do this instead:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --preset=presets/deepvoice3_vctk.json \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset \
   --load-embedding=20171213_deepvoice3_checkpoint_step000210000.pth

This may improve training speed a bit.

NIKL

You will be able to obtain cleaned-up audio samples in ../nikl_preprocoess. Details are found here.

Once the NIKL corpus is ready after preprocessing, you can extract features by:

python preprocess.py nikl_m ${your_nikl_root_path} data/nikl_m

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/nikl_m  --checkpoint-dir checkpoint_nikl_m \
   --preset=presets/deepvoice3_niklm.json

Speaker adaptation

If you have very limited data, you can consider fine-tuning a pre-trained model. For example, using a model pre-trained on LJSpeech, you can adapt it to data from VCTK speaker p225 (30 minutes) with the following command:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_adaptation \
    --preset=presets/deepvoice3_ljspeech.json \
    --log-event-path=log/deepvoice3_vctk_adaptation \
    --restore-parts="20171213_deepvoice3_checkpoint_step000210000.pth" \
    --speaker-id=0

In my experience, this reaches reasonable speech quality much more quickly than training the model from scratch.

There are two important options used above:

  • --restore-parts=<N>: specifies where to load model parameters from. The differences from the --checkpoint=<N> option are: 1) --restore-parts=<N> ignores all invalid (mismatched) parameters, while --checkpoint=<N> doesn't; 2) --restore-parts=<N> tells the trainer to start from step 0, while --checkpoint=<N> tells the trainer to continue from the last step. --checkpoint=<N> is fine if you are continuing to train exactly the same model, whereas --restore-parts=<N> is useful if you want to customize your model architecture and still take advantage of a pre-trained model. (A generic sketch of this partial loading appears after this list.)
  • --speaker-id=<N>: specifies which speaker's data is used for training. This should only be specified if you are using a multi-speaker dataset. For VCTK, speaker IDs are automatically assigned incrementally (0, 1, ..., 107) according to speaker_info.txt in the dataset.

If you are training a multi-speaker model, speaker adaptation will only work when n_speakers is identical.
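
To make the "ignores all invalid parameters" behaviour concrete, here is a generic PyTorch sketch of partial checkpoint loading, not the repository's actual trainer code: only parameters whose names and shapes match the current model are copied, everything else is skipped, and training starts again from step 0.

import torch

def restore_parts(model, checkpoint_path):
    # Copy only the compatible parameters from a checkpoint into `model`.
    saved = torch.load(checkpoint_path, map_location="cpu")
    saved_state = saved.get("state_dict", saved)  # key name assumed; it may just be a raw state dict
    own_state = model.state_dict()
    compatible = {
        name: tensor
        for name, tensor in saved_state.items()
        if name in own_state and own_state[name].shape == tensor.shape
    }
    own_state.update(compatible)
    model.load_state_dict(own_state)
    return len(compatible)  # number of restored tensors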

Troubleshooting

#5 RuntimeError: main thread is not in main loop

This may happen depending on which matplotlib backend you have. Try changing the backend as follows and see if it works:

MPLBACKEND=Qt5Agg python train.py ${args...}

In #78, engiecat reported that changing the matplotlib backend from Tkinter (TkAgg) to PyQt5 (Qt5Agg) fixed the problem.
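
Alternatively, a non-interactive backend can be selected programmatically before pyplot is imported. A minimal standalone sketch, not a change to the repository's code:

import matplotlib
matplotlib.use("Agg")  # headless backend; avoids Tk's main-thread restriction
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig("alignment_test.png")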

Sponsors

Acknowledgements

Part of the code was adapted from the following projects:

Banner and logo created by @jraulhernandezi (#76)
