
Audio Generation Evaluation

This toolbox aims to unify audio generation model evaluation to make future comparisons easier.

Quick Start

First, prepare the environment:

pip install git+https://github.com/haoheliu/audioldm_eval

Second, generate the test dataset:

python3 gen_test_file.py

Finally, perform a test run. A reference result is attached here for comparison.

python3 test.py # Evaluate and save the json file to disk (example/paired.json)

Evaluation metrics

This toolbox provides the following metrics:

  • Recommended:
    • FAD: Frechet audio distance
    • ISc: Inception score
  • Others, for reference:
    • FD: Frechet distance, computed with PANNs, a state-of-the-art audio classification model
    • KID: Kernel inception distance
    • KL: KL divergence (softmax over logits; see the sketch after this list)
    • KL_Sigmoid: KL divergence (sigmoid over logits)
    • PSNR: Peak signal-to-noise ratio
    • SSIM: Structural similarity index measure
    • LSD: Log-spectral distance
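
The two KL variants above differ only in how the classifier logits are normalized before comparing generated and reference audio. A minimal sketch of the idea in PyTorch (the direction of the divergence and the reduction are assumptions here, not necessarily the toolbox's exact implementation):

import torch
import torch.nn.functional as F

def kl_softmax(gen_logits: torch.Tensor, ref_logits: torch.Tensor) -> torch.Tensor:
    """KL over class distributions obtained via softmax (single-label view)."""
    log_p = F.log_softmax(gen_logits, dim=-1)  # generated audio
    q = F.softmax(ref_logits, dim=-1)          # reference audio
    return F.kl_div(log_p, q, reduction="batchmean")  # KL(q || p), averaged over the batch

def kl_sigmoid(gen_logits: torch.Tensor, ref_logits: torch.Tensor) -> torch.Tensor:
    """Per-class binary KL with sigmoid-squashed logits (multi-label view)."""
    p = torch.sigmoid(gen_logits).clamp(1e-6, 1 - 1e-6)
    q = torch.sigmoid(ref_logits).clamp(1e-6, 1 - 1e-6)
    kl = q * (q / p).log() + (1 - q) * ((1 - q) / (1 - p)).log()
    return kl.sum(dim=-1).mean()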

The evaluation function accepts the paths of two folders as its main parameters.

  1. If the two folders contain the same number of files with matching filenames, the evaluation runs in paired mode.
  2. If the two folders contain different numbers of files or mismatched filenames, the evaluation runs in unpaired mode.

The following metrics are calculated only in paired mode: KL, KL_Sigmoid, PSNR, SSIM, and LSD. In unpaired mode, these metrics return -1.
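
In other words, the mode can be determined from the directory listings alone. A hypothetical sketch of that check (the toolbox's actual logic may differ):

import os

def is_paired(generated_dir: str, reference_dir: str) -> bool:
    """Paired mode: same number of files with identical filenames in both folders."""
    gen_files = sorted(os.listdir(generated_dir))
    ref_files = sorted(os.listdir(reference_dir))
    return gen_files == ref_files  # equal length and matching names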

Evaluation on AudioCaps and AudioSet

The AudioCaps test set consists of audio files with multiple text annotations. To evaluate the performance of AudioLDM, we randomly selected one annotation per audio file, which can be found in the accompanying json file.

Given the size of the AudioSet evaluation set (approximately 20,000 audio files), evaluating on the entire set may be impractical for audio generative models. We therefore randomly selected 2,000 audio files for evaluation, with the corresponding annotations available in a json file.
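
The same kind of subsetting is easy to reproduce for a custom evaluation set. A hedged sketch (the file names and annotation schema below are assumptions, not the repo's actual json format):

import json
import random

random.seed(0)  # fix the seed so the subset is reproducible

# Assumed schema: {"audio_id.wav": ["caption 1", "caption 2", ...], ...}
with open("audioset_eval_annotations.json") as f:
    annotations = json.load(f)

# Keep 2,000 of the ~20,000 evaluation files ...
subset_ids = random.sample(sorted(annotations), k=2000)

# ... and one randomly chosen caption per file (as done for AudioCaps).
subset = {audio_id: random.choice(annotations[audio_id]) for audio_id in subset_ids}

with open("audioset_eval_2000.json", "w") as f:
    json.dump(subset, f, indent=2)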

For more information on our evaluation process, please refer to our paper.

Example

import torch
from audioldm_eval import EvaluationHelper

# GPU acceleration is preferred
device = torch.device("cuda:0")

generation_result_path = "example/paired"
target_audio_path = "example/reference"

# Initialize a helper instance (the first argument is the sampling rate in Hz)
evaluator = EvaluationHelper(16000, device)

# Perform evaluation; the result will be printed and saved as json
metrics = evaluator.main(
    generation_result_path,
    target_audio_path,
    limit_num=None # To evaluate only X (int) pairs of data, set limit_num=X
)
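
Because the results are also written to disk as json (example/paired.json in the quick start), they can be reloaded later without re-running the evaluation. A minimal sketch, assuming the file maps metric names to values:

import json

with open("example/paired.json") as f:
    saved_metrics = json.load(f)

for name, value in saved_metrics.items():
    print(f"{name}: {value}")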

Note

  • Update on 24 June 2023:
    • Issues on model evaluation: I found that the PANNs-based Frechet Distance (FD) and KL score are sometimes not as robust as FAD. For example, when the generated audio is all silence, FD and KL can still indicate that the model performs very well, while FAD and Inception Score (IS) correctly reflect the model's true poor performance. The resampling method applied to the audio can also significantly affect FD (±30) and KL (±0.4).

      • To address this issue, in another branch of this repo (passt_replace_panns), I replaced the PANNs model with PaSST, which I found to be more robust to the resampling method and other trivial mismatches.
    • Update on code: The calculation of FAD is slow. Now, after each calculation over a folder, the code saves the FAD features into an .npy file for later reuse (see the sketch below).
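
The caching pattern described in the code update is straightforward: compute a folder's features once, persist them, and reload them on subsequent runs. A generic sketch (the function and file names are illustrative, not the toolbox's internals):

import os
import numpy as np

def get_features(folder: str, compute_fn) -> np.ndarray:
    """Return cached features for `folder`, computing and saving them on a miss."""
    cache_path = os.path.join(folder, "fad_features.npy")
    if os.path.exists(cache_path):
        return np.load(cache_path)  # cache hit: skip the slow model forward pass
    features = compute_fn(folder)   # slow: run the audio embedding model
    np.save(cache_path, features)   # persist for later reuse
    return features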

TODO

  • Add pretrained AudioLDM model.
  • Add CLAP score.

Cite this repo

If you find this tool useful, please consider citing:

@article{liu2023audioldm,
  title={AudioLDM: Text-to-Audio Generation with Latent Diffusion Models},
  author={Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal={arXiv preprint arXiv:2301.12503},
  year={2023}
}

Reference

https://github.com/toshas/torch-fidelity

https://github.com/v-iashin/SpecVQGAN
