• This repository has been archived on 12 Feb 2022
• Stars: 144
• Rank: 254,904 (Top 6%)
• Language: Python
• License: MIT License
• Created: almost 7 years ago
• Updated: about 4 years ago


Repository Details

Deep Neural Network for Speaker Count Estimation


CountNet is a deep learning model for estimating the number of concurrent speakers from single-channel mixtures. This is a very challenging task and a mandatory first step in addressing any realistic “cocktail-party” scenario. It has various audio-based applications such as blind source separation, speaker diarisation, and audio surveillance.

This repo provides pre-trained models.

Publications

2019: IEEE/ACM Transactions on Audio, Speech, and Language Processing

  • Title: CountNet: Estimating the Number of Concurrent Speakers Using Supervised Learning
  • Authors: Fabian-Robert Stöter, Soumitro Chakrabarty, Bernd Edler, Emanuël A. P. Habets
  • Preprint: HAL
  • Proceedings: IEEE (paywall)

2018: ICASSP

  • Title: Classification vs. Regression in Supervised Learning for Single Channel Speaker Count Estimation
  • Authors: Fabian-Robert Stöter, Soumitro Chakrabarty, Bernd Edler, Emanuël A. P. Habets
  • Preprint: arXiv 1712.04555
  • Proceedings: IEEE (paywall)

Demos

A demo video is provided on the accompanying website.

Usage

This repository provides the Keras model to be used from Python to infer count estimates. The preprocessing depends on librosa and scikit-learn. Note that the provided model is trained on 16 kHz samples of 5 seconds duration.

Docker

Docker makes it easy to reproduce the results and install all requirements. If you have Docker installed, run the following steps to predict a count from the provided test sample.

  • Build the docker image: docker build -t countnet .
  • Predict from example: docker run -i countnet python predict.py --model CRNN examples/5_speakers.wav

Manual Installation

To install the requirements using Anaconda Python, run

conda env create -f env.yml

You can now run the command line script and process wav files using the pre-trained model CRNN (best performance).

python predict.py examples/5_speakers.wav --model CRNN

Reproduce Paper Results using the LibriCount Dataset

DOI

The full test dataset is available for download on Zenodo.

LibriCount10 0dB Dataset

The dataset contains a simulated cocktail party environment of [0..10] speakers, mixed with 0dB SNR from random utterances of different speakers from the LibriSpeech CleanTest dataset.

For each recording, the ground truth number of speakers is given in the file name: for k_uniquefile.wav, k is the maximum number of concurrent speakers within the 5 seconds of the recording.

All recordings are 5 s in duration. For each unique recording, we provide the audio wav file (16 bit, 16 kHz, mono) and an annotation JSON file with the same name as the recording.
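The naming convention above makes the ground truth trivial to recover in code. A minimal sketch (the helper name is hypothetical, not part of the repo):

```python
from pathlib import Path


def speaker_count_from_filename(path):
    """LibriCount file names follow 'k_uniquefile.wav', where k is the
    maximum number of concurrent speakers in the 5 s recording."""
    return int(Path(path).name.split("_", 1)[0])
```

For example, `speaker_count_from_filename("5_0a1b2c.wav")` yields 5.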

Metadata

In the annotation file we provide each speaker's sex, their unique speaker_id, and their vocal activity within the mixture recording, given in samples. Note that these annotations were generated automatically using a voice activity detection method.

In the following example, the ground-truth speaker count is 3.

[
	{
		"sex": "F", 
		"activity": [[0, 51076], [51396, 55400], [56681, 80000]], 
		"speaker_id": 1221
	}, 
	{
		"sex": "F", 
		"activity": [[0, 51877], [56201, 80000]], 
		"speaker_id": 3570
	}, 
	{
		"sex": "M", 
		"activity": [[0, 15681], [16161, 68213], [73498, 80000]], 
		"speaker_id": 5105
	}
]
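An annotation like the one above can be parsed with the standard library. A minimal sketch, assuming the 16 kHz sample rate stated earlier; the function name `summarize` is illustrative:

```python
import json

# The annotation example above, as shipped next to each wav file
annotation = """
[
  {"sex": "F", "activity": [[0, 51076], [51396, 55400], [56681, 80000]], "speaker_id": 1221},
  {"sex": "F", "activity": [[0, 51877], [56201, 80000]], "speaker_id": 3570},
  {"sex": "M", "activity": [[0, 15681], [16161, 68213], [73498, 80000]], "speaker_id": 5105}
]
"""


def summarize(annotation_json, sr=16000):
    """Return the ground-truth speaker count and each speaker's total
    active time in seconds (activity intervals are given in samples)."""
    speakers = json.loads(annotation_json)
    active = {s["speaker_id"]: sum(end - start for start, end in s["activity"]) / sr
              for s in speakers}
    return len(speakers), active
```

Here `summarize(annotation)` returns a count of 3 and roughly 4.9 s of active speech for speaker 1221, close to the full 5 s clip.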

Running evaluation

Running python eval.py ~/path/to/LibriCount10-0dB --model CRNN outputs the mean absolute error (MAE) per class as well as averaged over all files.
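The per-class and averaged MAE can be sketched as below. This mirrors what eval.py reports, but the exact aggregation in the script may differ; the function name is an assumption.

```python
import numpy as np


def mae_per_class(y_true, y_pred, n_classes=11):
    """Mean absolute error grouped by the true speaker count (0..10),
    plus the overall MAE averaged over all files."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    per_class = {k: float(np.abs(y_pred[y_true == k] - k).mean())
                 for k in range(n_classes) if np.any(y_true == k)}
    overall = float(np.abs(y_pred - y_true).mean())
    return per_class, overall
```

For instance, with truths [0, 1, 1, 2] and predictions [0, 1, 2, 4], class 2 has MAE 2.0 while the overall MAE is 0.75.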

Pretrained models

Name     Number of Parameters   MAE on test set
RNN      0.31M                  0.38
F-CRNN   0.06M                  0.36
CRNN     0.35M                  0.27

FAQ

Is it possible to convert the model to run on a modern version of keras with tensorflow backend?

Yes, it's possible, but I was unable to get identical results when converting the model. I tried this guide, but it still didn't reach the same performance as Keras 1.2.2 with the Theano backend.
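One known part of such a port is that Theano's convolution uses spatially flipped kernels relative to TensorFlow, so converted weights typically need their spatial axes reversed (this is what old Keras' kernel-conversion utility did). A numpy-only sketch of that step; as noted above, this alone did not yield numerical parity:

```python
import numpy as np


def flip_conv_kernel(w):
    """Reverse the two spatial axes of a 4D conv kernel
    (rows, cols, in_channels, out_channels) when porting weights
    between the Theano and TensorFlow conventions. This is only one
    ingredient of a full conversion, not a complete recipe."""
    return np.ascontiguousarray(w[::-1, ::-1, :, :])
```

Full conversion would also involve re-saving the architecture for the newer Keras API, which is where the remaining discrepancies likely arise.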

License

MIT
