  • Stars: 508
  • Rank: 86,941 (top 2%)
  • Language: Jupyter Notebook
  • License: Other
  • Created: about 7 years ago
  • Updated: over 1 year ago


Repository Details

PyTorch implementation of GAN-based text-to-speech synthesis and voice conversion (VC)

GAN TTS


PyTorch implementation of generative adversarial network (GAN)-based text-to-speech (TTS) synthesis and voice conversion (VC).

  1. Yuki Saito, Shinnosuke Takamichi, and Hiroshi Saruwatari. "Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks." IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017.
  2. Shan Yang, Lei Xie, Xiao Chen, Xiaoyan Lou, Xuan Zhu, Dongyan Huang, and Haizhou Li. "Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under a Multi-task Learning Framework." arXiv:1707.01670, Jul 2017.

Generated audio samples

Audio samples are available in the Jupyter notebooks at the link below:

Notes on hyper parameters

  • adversarial_streams, which specifies the streams (mgc, lf0, vuv, bap) used to compute the adversarial loss, is a parameter that strongly affects speech quality. Computing the adversarial loss on mgc features (except for the first few dimensions) seems to work well.
  • If mask_nth_mgc_for_adv_loss > 0, the first mask_nth_mgc_for_adv_loss dimensions of mgc are ignored when computing the adversarial loss (see the sketch after this list). As described in saito2017asja, I confirmed that using the 0th (and 1st) mgc coefficients for the adversarial loss hurts speech quality. In my experience, mask_nth_mgc_for_adv_loss = 1 for mgc order 25 and mask_nth_mgc_for_adv_loss = 2 for mgc order 59 work well.
  • F0 extracted by WORLD is spline-interpolated. Set f0_interpolation_kind to "slinear" if you want first-order spline interpolation, which is the same as Merlin's default.
  • Set use_harvest to True if you want to use the Harvest F0 estimation algorithm. If False, Dio and StoneMask are used to estimate and refine F0.
  • If you see "cuda runtime error (2) : out of memory", try a smaller batch size (see issue #3).
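
For concreteness, here is a minimal sketch of the masking described above. This is not the repository's actual code; the function name and tensor shapes are illustrative:

import torch

def adversarial_features(mgc, mask_nth_mgc_for_adv_loss=2):
    # mgc: (batch, time, num_dims) acoustic features.
    # Drop the first `mask_nth_mgc_for_adv_loss` dimensions so that the
    # discriminator never sees the coefficients that hurt speech quality.
    return mgc[:, :, mask_nth_mgc_for_adv_loss:]

mgc = torch.randn(8, 100, 60)               # e.g. mgc order 59
print(adversarial_features(mgc).shape)      # torch.Size([8, 100, 58])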

Notes on [2]

Though I haven't obtained improvements over Saito's approach [1] yet, the GAN-based models described in [2] can be realized with the following configurations (a code sketch follows the list):

  • Set generator_add_noise to True. This enables the generator to take Gaussian noise as input; linguistic features are concatenated with the noise vector.
  • Set discriminator_linguistic_condition to True. The discriminator then uses linguistic features as a condition.
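
A minimal sketch of both settings; the tensor shapes and variable names are illustrative assumptions, not the repository's actual code:

import torch

batch, time = 8, 100
linguistic_dim, noise_dim, acoustic_dim = 425, 200, 187   # illustrative sizes

linguistic = torch.randn(batch, time, linguistic_dim)

# generator_add_noise = True: the generator input is the linguistic
# features concatenated with a Gaussian noise vector.
noise = torch.randn(batch, time, noise_dim)
g_input = torch.cat([linguistic, noise], dim=-1)

# discriminator_linguistic_condition = True: the discriminator sees the
# (real or generated) acoustic frames together with the linguistic condition.
acoustic = torch.randn(batch, time, acoustic_dim)
d_input = torch.cat([acoustic, linguistic], dim=-1)

print(g_input.shape, d_input.shape)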

Requirements

  • PyTorch
  • TensorFlow
  • SRU (optional)

Installation

Please install PyTorch, TensorFlow, and SRU (if needed) first. Once you have those, then

git clone --recursive https://github.com/r9y9/gantts && cd gantts
pip install -e ".[train]"

should install all other dependencies.

Repository structure

  • gantts/: Network definitions and utilities for sequence-loss optimization.
  • prepare_features_vc.py: Acoustic feature extraction script for voice conversion.
  • prepare_features_tts.py: Linguistic/duration/acoustic feature extraction script for TTS.
  • train.py: GAN-based training script. This is written to be generic so that it can be used for training voice conversion models as well as text-to-speech models (duration/acoustic).
  • train_gan.sh: Adversarial training wrapper script for train.py.
  • hparams.py: Hyper parameters for VC and TTS experiments.
  • evaluation_vc.py: Evaluation script for VC.
  • evaluation_tts.py: Evaluation script for TTS.

Feature extraction scripts are written for the CMU ARCTIC dataset, but can easily be adapted for other datasets.

Run demos

Voice conversion (en)

vc_demo.sh is a clb-to-slt voice conversion demo script. Before running the script, please download wav files for clb and slt from CMU ARCTIC and check that you have all the data in a directory laid out as follows:

> tree ~/data/cmu_arctic/ -d -L 1
/home/ryuichi/data/cmu_arctic/
├── cmu_us_awb_arctic
├── cmu_us_bdl_arctic
├── cmu_us_clb_arctic
├── cmu_us_jmk_arctic
├── cmu_us_ksp_arctic
├── cmu_us_rms_arctic
└── cmu_us_slt_arctic

Once you have downloaded the dataset, run:

./vc_demo.sh ${experimental_id} ${your_cmu_arctic_data_root}

e.g.,

 ./vc_demo.sh vc_gan_test ~/data/cmu_arctic/

Model checkpoints will be saved at ./checkpoints/${experimental_id} and audio samples are saved at ./generated/${experimental_id}.

Text-to-speech synthesis (en)

tts_demo.sh is a self-contained TTS demo script. The usage is:

./tts_demo.sh ${experimental_id}

This will download the slt_arctic_full_data used in Merlin's demo, perform feature extraction, train models, and synthesize audio samples for the eval/test sets. ${experimental_id} can be an arbitrary string, for example:

./tts_demo.sh tts_test

Model checkpoints will be saved at ./checkpoints/${experimental_id} and audio samples are saved at ./generated/${experimental_id}.

Hyper parameters

See hparams.py.
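
As a rough illustration, the parameters discussed in this README could be collected as below. The parameter names are taken from this README, but the container and values are assumptions; consult hparams.py for the actual definitions:

hparams = {
    "adversarial_streams": [True, False, False, False],  # mgc, lf0, vuv, bap
    "mask_nth_mgc_for_adv_loss": 2,        # e.g. for mgc order 59
    "f0_interpolation_kind": "slinear",    # first-order spline, as in Merlin
    "use_harvest": True,                   # otherwise Dio + StoneMask
    "generator_add_noise": False,          # see "Notes on [2]"
    "discriminator_linguistic_condition": False,
}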

Monitoring training progress

tensorboard --logdir=log


Notice

The repository doesn't try to reproduce the same results reported in the papers because 1) the data is not publicly available and 2) hyper parameters highly depend on the data. Instead, I tried the same ideas on different data with different hyper parameters.

More Repositories

  1. wavenet_vocoder: WaveNet vocoder (Python, 2,187 stars)
  2. deepvoice3_pytorch: PyTorch implementation of convolutional neural network-based text-to-speech synthesis models (Python, 1,852 stars)
  3. pysptk: A Python wrapper for Speech Signal Processing Toolkit (SPTK) (Python, 412 stars)
  4. nnmnkwii: Library to build speech synthesis systems designed for easy and fast prototyping (Python, 382 stars)
  5. tacotron_pytorch: PyTorch implementation of the Tacotron speech synthesis model (Jupyter Notebook, 286 stars)
  6. ttslearn: Library for Pythonで学ぶ音声合成 (Text-to-speech with Python) (Jupyter Notebook, 219 stars)
  7. pyopenjtalk: Python wrapper for OpenJTalk (Python, 143 stars)
  8. pylibfreenect2: A Python interface for libfreenect2 (Python, 131 stars)
  9. SPTK: A modified version of Speech Signal Processing Toolkit (SPTK) (C, 85 stars)
  10. nnmnkwii_gallery: A collection of examples demonstrating how to build speech synthesis systems using nnmnkwii (Jupyter Notebook, 70 stars)
  11. gossp: Speech signal processing for Go (not maintained) (Go, 67 stars)
  12. pyreaper: A Python wrapper for REAPER (Cython, 64 stars)
  13. sinsy: A fork of Sinsy, an HMM/DNN-based singing voice synthesis system (C++, 57 stars)
  14. pysinsy: Python wrapper for Sinsy (Python, 47 stars)
  15. open_jtalk: A fork of open_jtalk (C++, 42 stars)
  16. jsut-lab: HTS-style full-context labels for JSUT v1.1 (40 stars)
  17. VoiceConversion.jl: [Deprecated] Statistical voice conversion in Julia; see the website link for the new library (Julia, 37 stars)
  18. icassp2020-espnet-tts-merlin-baseline: ICASSP 2020 ESPnet-TTS: Merlin baseline system (Jupyter Notebook, 35 stars)
  19. nnet: A small collection of neural network algorithms in Go (no longer maintained) (Go, 29 stars)
  20. WORLD.jl: A lightweight Julia wrapper for WORLD, a high-quality speech analysis, modification and synthesis system (Julia, 27 stars)
  21. MelGeneralizedCepstrums.jl: Mel-generalized cepstrum analysis (Julia, 19 stars)
  22. nlp100: Assignments for NLP 100 (Python, 18 stars)
  23. hts_engine_API: A fork of hts_engine_API (C, 17 stars)
  24. bayesian-kalmanfilter: Variational Bayesian Kalman filter (Python, 16 stars)
  25. WORLD: A modified version of WORLD (original: http://ml.cs.yamanashi.ac.jp/world/english/index.html) (C++, 14 stars)
  26. Colaboratory: Colaboratory notebooks (Jupyter Notebook, 14 stars)
  27. SynthesisFilters.jl: Speech waveform synthesis filters (Julia, 13 stars)
  28. SPTK.jl: A thin Julia wrapper for the Speech Signal Processing Toolkit (SPTK) API (Julia, 11 stars)
  29. robust_pca: Robust principal component analysis (C++, 10 stars)
  30. ConstantQ.jl: A fast constant-Q transform in Julia (Julia, 9 stars)
  31. demos: Deprecated; see https://github.com/r9y9/website (HTML, 6 stars)
  32. naive_bayes: Naive Bayes implementation with a digit recognition sample (Python, 6 stars)
  33. kiritan_singing_extra: Extra resources derived from https://github.com/mmorise/kiritan_singing for DNN-based singing voice synthesis (6 stars)
  34. stav: Statistical voice conversion written in Go for the signal processing backend and Python for model training and parameter conversion (Python, 6 stars)
  35. go-world: Go port of WORLD, a high-quality speech analysis, modification and synthesis system (Go, 6 stars)
  36. VCTK-lab: Full-context labels for the VCTK corpus extracted by Merlin & speech tools (6 stars)
  37. RobustPCA.jl: Robust principal component analysis in Julia (Julia, 5 stars)
  38. julia-nmf-ss-toy: NMF-based music source separation demo in Julia (Julia, 5 stars)
  39. REAPER.jl: A Julia interface for REAPER (Robust Epoch And Pitch EstimatoR) (Julia, 5 stars)
  40. dotfiles: Dotfiles (Shell, 4 stars)
  41. Libfreenect2.jl: A Julia wrapper for libfreenect2 (Julia, 4 stars)
  42. blog: Deprecated; see https://github.com/r9y9/website instead (HTML, 4 stars)
  43. r9y9.github.io: My website (HTML, 4 stars)
  44. BNMF.jl: Bayesian non-negative matrix factorization (Jupyter Notebook, 4 stars)
  45. fft: A simple implementation of the Fast Fourier Transform (FFT) (C, 3 stars)
  46. ita-lab: HTS-style full-context labels for the ITA corpus multimodal database (ITAコーパス マルチモーダルデータベース) (Python, 2 stars)
  47. sandbox (2 stars)
  48. media-player-demo: A media player demonstration using Qt Multimedia (C++, 2 stars)
  49. docker-pytorch-apex: Docker files for PyTorch + Apex (Dockerfile, 2 stars)
  50. SpeechBase.jl: Please do not use this package; SpeechBase.jl still needs to be carefully designed (Julia, 2 stars)
  51. FHMMs.jl: Proof of concept (Julia, 1 star)
  52. commonvoice-lab: HTS-style full-context labels for Common Voice (1 star)
  53. setup: Setup script for Linux systems (Shell, 1 star)
  54. mnist: Go bindings for MNIST (Go, 1 star)
  55. HMMs.jl: Hidden Markov models in Julia (Julia, 1 star)
  56. demos-src: Deprecated; see https://github.com/r9y9/website (HTML, 1 star)
  57. pcdio_test (C++, 1 star)
  58. HTSEngineAPI.jl: A Julia wrapper for hts_engine_API (Julia, 1 star)
  59. python-neural-net-toy-codes: Feed-forward neural networks with XOR and MNIST examples (Python, 1 star)
  60. css10-lab: HTS-style full-context labels for the CSS 10 Ja corpus (1 star)
  61. svdd2024seg (Python, 1 star)