

Tacotron2-PyTorch

Yet another PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions, with a reduction factor and faster training speed. The project is heavily based on the works listed in the References section below; I made some modifications to improve the speed and performance of both training and inference.

TODO

  • Add Colab demo.
  • Update README.
  • Upload pretrained models.
  • Make compatible with WaveGlow and HiFi-GAN.

Requirements

  • Python >= 3.5.2
  • torch >= 1.0.0
  • numpy
  • scipy
  • pillow
  • inflect
  • librosa
  • Unidecode
  • matplotlib
  • tensorboardX
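
If the repository ships a requirements.txt, prefer installing from that; otherwise the dependencies above can be installed in one step with pip:

pip3 install torch numpy scipy pillow inflect librosa Unidecode matplotlib tensorboardX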

Preprocessing

Currently only LJ Speech is supported. You can modify hparams.py for different sampling rates. prep decides whether to preprocess all utterances before training or to preprocess on the fly during training; pth specifies the path where the preprocessed data are stored.
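
For illustration, here is a minimal sketch of the relevant hparams.py entries. The names prep and pth come from the description above; the values and the sample-rate field name are assumptions, so check the actual file:

# Hypothetical excerpt of hparams.py -- verify names and values against the real file
prep = True                   # True: preprocess all utterances before training; False: preprocess online
pth = 'data/ljspeech_prep'    # path where preprocessed data are stored
sample_rate = 22050           # LJ Speech default; modify for different sampling rates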

Training

  1. For training Tacotron2, run the following command.
python3 train.py \
    --data_dir=<dir/to/dataset> \
    --ckpt_dir=<dir/to/models>
  2. If you have multiple GPUs, try distributed.launch.
python -m torch.distributed.launch --nproc_per_node <NUM_GPUS> train.py \
    --data_dir=<dir/to/dataset> \
    --ckpt_dir=<dir/to/models>

Note that the effective training batch size will be <NUM_GPUS> times larger (e.g., a per-GPU batch size of 32 on 4 GPUs gives an effective batch size of 128).

  3. For training from a pretrained model, run the following command.
python3 train.py \
    --data_dir=<dir/to/dataset> \
    --ckpt_dir=<dir/to/models> \
    --ckpt_pth=<pth/to/pretrained/model>
  4. To use TensorBoard (optional), run the following command.
python3 train.py \
    --data_dir=<dir/to/dataset> \
    --ckpt_dir=<dir/to/models> \
    --log_dir=<dir/to/logs>

You can find alignment images and synthesized audio clips generated during training. The text to synthesize can be set in hparams.py.
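
To monitor training, point TensorBoard at the log directory (this assumes the tensorboard package is installed alongside tensorboardX):

tensorboard --logdir=<dir/to/logs>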

Inference

  • For synthesizing wav files, run the following command.
python3 inference.py \
    --ckpt_pth=<pth/to/model> \
    --img_pth=<pth/to/save/alignment> \
    --npy_pth=<pth/to/save/mel> \
    --wav_pth=<pth/to/save/wav> \
    --text=<text/to/synthesize>
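
The saved .npy file holds the predicted mel spectrogram, which can be inspected offline. A minimal sketch, assuming a 2-D array of shape (n_mels, frames) and a hypothetical output path out/mel.npy:

import numpy as np
import matplotlib.pyplot as plt

# Load the mel spectrogram written by inference.py (shape assumed: (n_mels, frames))
mel = np.load('out/mel.npy')
print(mel.shape, mel.dtype)

# Quick visual sanity check of the prediction
plt.imshow(mel, aspect='auto', origin='lower')
plt.xlabel('frame')
plt.ylabel('mel bin')
plt.savefig('out/mel.png', dpi=150)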

Pretrained Model

You can download pretrained models from Releases. The hyperparameters used for training are also included there. All the models were trained using 8 GPUs.

Vocoder

A vocoder is not included, but the model is compatible with WaveGlow and HiFi-GAN. Check the Colab demo for more information.
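
As an illustration of vocoding the predicted mel with WaveGlow, here is a sketch using NVIDIA's published torch.hub entry point. The mel scaling and shape that WaveGlow expects may differ from this model's output, and the paths and 22050 Hz rate are assumptions, so verify against the Colab demo:

import numpy as np
import torch
from scipy.io.wavfile import write

# NVIDIA's pretrained WaveGlow from torch.hub
waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_waveglow')
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to('cuda').eval()

# Mel saved by inference.py; WaveGlow expects shape (1, n_mels, frames)
mel = torch.from_numpy(np.load('out/mel.npy')).unsqueeze(0).to('cuda')

with torch.no_grad():
    audio = waveglow.infer(mel)  # float tensor of shape (1, samples)

write('out/waveglow.wav', 22050, audio[0].cpu().numpy())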

References

This project is heavily based on the works below.