Avocodo: Generative Adversarial Network for Artifact-Free Vocoder
Accepted for publication in the 37th AAAI Conference on Artificial Intelligence.
In our paper, we proposed Avocodo. We provide our implementation as open source in this repository.
Abstract: Neural vocoders based on the generative adversarial network (GAN) have been widely used due to their fast inference speed and lightweight networks while generating high-quality speech waveforms. Since the perceptually important speech components are primarily concentrated in the low-frequency bands, most GAN-based vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. This multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments, we discovered that the multi-scale analysis which focuses on the low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the synthesized speech waveform quality. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators to evaluate speech waveforms from various perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank to obtain downsampled multi-band speech waveforms while avoiding aliasing. According to experimental results, Avocodo outperforms baseline GAN-based vocoders, both objectively and subjectively, while reproducing speech with fewer artifacts.
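For intuition about the pseudo quadrature mirror filter bank (PQMF) analysis mentioned in the abstract, below is a minimal NumPy/SciPy sketch of a cosine-modulated analysis filter bank. This is not the implementation used in this repository; the tap count, cutoff ratio, and Kaiser beta are illustrative values only.

```python
import numpy as np
from scipy.signal import firwin

def pqmf_analysis(x, subbands=4, taps=62, cutoff=0.15, beta=9.0):
    """Split waveform x into critically sampled sub-bands (illustrative)."""
    # Kaiser-window prototype lowpass filter.
    h_proto = firwin(taps + 1, cutoff, window=("kaiser", beta))
    n = np.arange(taps + 1)
    # Cosine-modulate the prototype into one analysis filter per band.
    filters = [
        2.0 * h_proto * np.cos(
            (2 * k + 1) * (np.pi / (2 * subbands)) * (n - taps / 2)
            + (-1) ** k * np.pi / 4
        )
        for k in range(subbands)
    ]
    # Bandpass-filter each band, then decimate by the number of sub-bands;
    # filtering before decimation is what keeps aliasing low in each band.
    return np.stack([np.convolve(x, f)[::subbands] for f in filters])

# Example: split a 1-second signal into 4 critically sampled bands.
x = np.random.randn(22050)
print(pqmf_analysis(x).shape)  # (4, 5528)
```

The key point, per the abstract, is that this filtered downsampling gives the discriminators multi-band views of the waveform without the aliasing that plain decimation would introduce.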
Prerequisites
- Install pyenv
  - pyenv
  - pyenv automatic installer (recommended)
- Clone this repository
- Set up a virtual environment and install the Python requirements (please refer to pyproject.toml) by running the commands below.
```
pyenv install 3.8.11
pyenv virtualenv 3.8.11 avocodo
pyenv local avocodo
pip install wheel
pip install poetry
poetry install
```
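As a quick sanity check after `poetry install` (this assumes PyTorch is among the locked dependencies, which the training script requires):

```python
# Confirm the environment resolves the deep-learning dependency.
import torch
print(torch.__version__)
print(torch.cuda.is_available())  # a GPU is strongly recommended for training
```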
- Download and extract the LJ Speech dataset.
- Move all wav files to LJSpeech-1.1/wavs.
- Split the dataset into a training set and a validation set using the commands below.
```
cat LJSpeech-1.1/metadata.csv | tail -n 13000 > training.txt
cat LJSpeech-1.1/metadata.csv | head -n 100 > validation.txt
```
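LJSpeech's metadata.csv contains 13,100 lines, so the two files above are disjoint: validation takes the first 100 lines and training the last 13,000. A quick check:

```python
# Verify the sizes of the split files created above.
with open("training.txt") as f:
    n_train = sum(1 for _ in f)
with open("validation.txt") as f:
    n_val = sum(1 for _ in f)
print(n_train, n_val)  # expect: 13000 100
```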
Training
```
python avocodo/train.py --config avocodo/configs/avocodo_v1.json
```
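To review the hyperparameters a run will use, the JSON config can be inspected directly. This is a generic sketch; the actual key names depend on the contents of avocodo_v1.json:

```python
# Print the top-level entries of the selected training config.
import json

with open("avocodo/configs/avocodo_v1.json") as f:
    config = json.load(f)
for key, value in config.items():
    print(f"{key}: {value}")
```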
Inference
```
python avocodo/inference.py --version ${version} --checkpoint_file_id ${checkpoint_file_id}
```
Reference
We referred to the repositories below while building this project.