

Speech Enhancement and Dereverberation with Diffusion-based Generative Models

Figure: Diffusion process on a spectrogram. In the forward process, noise is gradually added to the clean speech spectrogram x_0, while the reverse process learns to generate clean speech in an iterative fashion, starting from the corrupted signal x_T.

This repository contains the official PyTorch implementations for the papers [1] and [2] (see References below).

Audio examples and further supplementary materials are available on our project page.

Follow-up work

Please also check out our follow-up work, for which code is also available.

Installation

  • Create a new virtual environment with Python 3.8 (we have not tested other Python versions, but they may work).
  • Install the package dependencies via pip install -r requirements.txt.
  • If using W&B logging (default):
    • Set up a wandb.ai account
    • Log in via wandb login before running our code.
  • If not using W&B logging:
    • Pass the option --no_wandb to train.py.
    • Your logs will be stored as local TensorBoard logs. Run tensorboard --logdir logs/ to see them.

Pretrained checkpoints

  • For the Speech Enhancement task, we provide pretrained checkpoints for the models trained on VoiceBank-DEMAND and WSJ0-CHiME3, as in the paper. They can be downloaded here.
  • For the Dereverberation task, we provide a checkpoint trained on our WSJ0-REVERB dataset. It can be downloaded here.
    • Note that this checkpoint works better with sampler settings --N 50 --snr 0.33.

Usage

  • For resuming training, you can use the --resume_from_checkpoint option of train.py.
  • For evaluating these checkpoints, use the --ckpt option of enhancement.py (see section Evaluation below).

Training

Training is done by executing train.py. A minimal example with default settings (as in our paper [2]) can be run with

python train.py --base_dir <your_base_dir>

where <your_base_dir> is the path to a folder containing the subdirectories train/ and valid/ (and optionally test/). Each subdirectory must itself contain two subdirectories, clean/ and noisy/, with the same filenames present in both. We currently only support training with .wav files.
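To illustrate the expected layout, here is a small helper (not part of the repository; the function name is our own) that checks a base directory for the required subdirectories and for matching filenames in clean/ and noisy/:

```python
# Sketch (not part of the repository): verify the dataset layout that
# train.py expects. Returns a list of problems; an empty list means the
# layout looks valid.
from pathlib import Path

def check_base_dir(base_dir, subsets=("train", "valid")):
    problems = []
    for subset in subsets:
        clean = Path(base_dir) / subset / "clean"
        noisy = Path(base_dir) / subset / "noisy"
        for d in (clean, noisy):
            if not d.is_dir():
                problems.append("missing directory: %s" % d)
        if clean.is_dir() and noisy.is_dir():
            clean_names = {p.name for p in clean.glob("*.wav")}
            noisy_names = {p.name for p in noisy.glob("*.wav")}
            # Filenames must match one-to-one between clean/ and noisy/
            for name in sorted(clean_names ^ noisy_names):
                problems.append("%s: unpaired file %s" % (subset, name))
    return problems
```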

To see all available training options, run python train.py --help. Note that the available options for the SDE and the backbone network change depending on which SDE and backbone you use. These can be set through the --sde and --backbone options.
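One way such backbone-dependent options can be wired up is with a two-pass parser: a first pass reads --backbone, and only that backbone's extra flags are then registered. This is a hypothetical sketch, not train.py's actual parser; the backbone names match the repository, but the per-backbone flags shown are illustrative.

```python
# Hypothetical sketch of backbone-dependent CLI options (train.py's real
# parser differs). The per-backbone flags below are illustrative only.
import argparse

BACKBONE_ARGS = {
    "ncsnpp": {"--ch_mult": dict(type=int, nargs="+", default=[1, 1, 2, 2])},
    "dcunet": {"--n_fft": dict(type=int, default=512)},
}

def parse_train_args(argv):
    # First pass: determine the backbone without erroring on unknown flags
    pre = argparse.ArgumentParser(add_help=False)
    pre.add_argument("--backbone", choices=sorted(BACKBONE_ARGS), default="ncsnpp")
    known, _ = pre.parse_known_args(argv)
    # Second pass: full parser inherits --backbone and adds the
    # backbone-specific flags
    parser = argparse.ArgumentParser(parents=[pre])
    for flag, spec in BACKBONE_ARGS[known.backbone].items():
        parser.add_argument(flag, **spec)
    return parser.parse_args(argv)
```

For example, parse_train_args(["--backbone", "dcunet"]).n_fft is 512, while that attribute does not exist at all when the ncsnpp backbone is selected.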

Note:

  • Our journal preprint [2] uses --backbone ncsnpp.
  • Our Interspeech paper [1] uses --backbone dcunet. You need to pass --n_fft 512 to make it work.
    • Also note that the default parameters for the spectrogram transformation in this repository are slightly different from the ones listed in the first (Interspeech) paper (--spec_factor 0.15 rather than --spec_factor 0.333), but we've found the value in this repository to generally perform better for both models [1] and [2].
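The spectrogram transformation referred to above compresses complex STFT coefficients by a magnitude exponent and a scale factor. A minimal sketch, assuming a transform of the form c * |x|^a * e^(i∠x), where c corresponds to --spec_factor; the exponent a = 0.5 and the function names are our own illustrative assumptions:

```python
# Sketch of an amplitude-compressed spectrogram representation controlled by
# --spec_factor: complex STFT bins are mapped as x' = c * |x|**a * exp(1j*angle(x)).
# c = 0.15 is the repository default discussed above; the magnitude exponent
# a = 0.5 is an assumption for illustration.
import numpy as np

def spec_forward(x, factor=0.15, exponent=0.5):
    # Compress magnitudes, keep phase unchanged
    return factor * np.abs(x) ** exponent * np.exp(1j * np.angle(x))

def spec_backward(y, factor=0.15, exponent=0.5):
    # Inverse of spec_forward: undo the scale, then the magnitude compression
    return (np.abs(y) / factor) ** (1.0 / exponent) * np.exp(1j * np.angle(y))
```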

Evaluation

To evaluate on a test set, run

python enhancement.py --test_dir <your_test_dir> --enhanced_dir <your_enhanced_dir> --ckpt <path_to_model_checkpoint>

to generate the enhanced .wav files, and subsequently run

python calc_metrics.py --test_dir <your_test_dir> --enhanced_dir <your_enhanced_dir>

to calculate and output the instrumental metrics.

Both scripts should receive the same --test_dir and --enhanced_dir parameters. The --ckpt parameter of enhancement.py should be the path to a trained model checkpoint, as stored by the logger in logs/.
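As an illustration of the kind of instrumental metric such a script can compute, here is a self-contained implementation of SI-SDR; this is the standard definition, not necessarily the exact code used by calc_metrics.py, which may also report metrics such as PESQ or ESTOI.

```python
# Illustrative implementation of one widely used instrumental metric, SI-SDR
# (scale-invariant signal-to-distortion ratio, in dB).
import numpy as np

def si_sdr(reference, estimate, eps=1e-8):
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    # Project the estimate onto the reference to obtain the scaled target
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    residual = estimate - target
    # Ratio of target power to residual (distortion) power, in dB
    return 10 * np.log10((np.dot(target, target) + eps)
                         / (np.dot(residual, residual) + eps))
```

Because of the projection step, a merely rescaled estimate is not penalized, which is what makes the metric scale-invariant.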

Citations / References

We kindly ask you to cite our papers in your publication when using any of our research or code:

@inproceedings{welker22speech,
  author={Simon Welker and Julius Richter and Timo Gerkmann},
  title={Speech Enhancement with Score-Based Generative Models in the Complex {STFT} Domain},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={2928--2932},
  doi={10.21437/Interspeech.2022-10653}
}
@article{richter2023speech,
  title={Speech Enhancement and Dereverberation with Diffusion-based Generative Models},
  author={Richter, Julius and Welker, Simon and Lemercier, Jean-Marie and Lay, Bunlong and Gerkmann, Timo},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  volume={31},
  pages={2351--2364},
  year={2023},
  doi={10.1109/TASLP.2023.3285241}
}

[1] Simon Welker, Julius Richter, Timo Gerkmann. "Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain", ISCA Interspeech, Incheon, Korea, Sep. 2022.

[2] Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, Timo Gerkmann. "Speech Enhancement and Dereverberation with Diffusion-Based Generative Models", IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 2351-2364, 2023.
