  • Stars: 198
  • Rank: 196,898 (Top 4%)
  • Language: Python
  • License: Mozilla Public License 2.0
  • Created: over 4 years ago
  • Updated: 4 months ago

Repository Details

Wake word detection modeling toolkit for Firefox Voice, supporting open datasets like Speech Commands and Common Voice.

Howl

PyPI License: MPL 2.0

Wake word detection modeling for Firefox Voice, supporting open datasets like Google Speech Commands and Mozilla Common Voice.

Citation:

@inproceedings{tang-etal-2020-howl,
    title = "Howl: A Deployed, Open-Source Wake Word Detection System",
    author = "Tang, Raphael and Lee, Jaejun and Razi, Afsaneh and Cambre, Julia and Bicking, Ian and Kaye, Jofish and Lin, Jimmy",
    booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
    month = nov,
    year = "2020",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nlposs-1.9",
    doi = "10.18653/v1/2020.nlposs-1.9",
    pages = "61--65"
}

Training Guide

Installation

  1. git clone https://github.com/castorini/howl && cd howl

  2. Install PyTorch by following your platform-specific instructions.

  3. Install PyAudio and its dependencies through your distribution's package system.

  4. pip install -r requirements.txt -r requirements_training.txt (some apt packages might need to be installed)

  5. ./download_mfa.sh to set up the Montreal Forced Aligner (MFA) for dataset generation (a quick sanity check of the full installation follows below)
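Once these steps are complete, the quick import check below can confirm that the audio and deep learning backends are available. This is a minimal sketch that assumes only the packages installed above; it is not part of Howl.

# Sanity-check the installation (illustrative; not part of Howl itself).
import pyaudio
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("Audio devices visible to PyAudio:", pyaudio.PyAudio().get_device_count())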

Preparing a Dataset

Generating a dataset for a custom wake word requires three steps:

  1. Generate a raw audio dataset that Howl can load from open datasets.
  2. Generate orthographic transcription alignments for each audio file.
  3. Attach the alignments to the raw audio dataset generated in step 1.

We recommend the Common Voice dataset as the open audio dataset and the Montreal Forced Aligner (MFA) for the transcription alignment. MFA can be downloaded simply by running the download_mfa.sh script; along with the aligner, the script also downloads the necessary English pronunciation dictionary.

Once these are ready, a dataset can be generated using the following script (the role of the inference sequence argument is illustrated at the end of this section).

./generate_dataset.sh <common voice dataset path> <underscore separated wakeword (e.g. hey_fire_fox)> <inference sequence (e.g. [0,1,2])> <(Optional) "true" to skip negative dataset generation>

For a detailed explanation, please refer to the guide How to generate a dataset for custom wakeword.
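As a concrete illustration of the wake word and inference sequence arguments (using the hey fire fox example from this README; the values are illustrative), the underscore-separated wake word defines the vocabulary, and the inference sequence lists the vocabulary indices that must be detected in order:

# Illustrative only: how the wake word and the inference sequence relate.
# "hey_fire_fox" on the command line corresponds to this vocabulary, and
# [0,1,2] means the words at indices 0, 1 and 2 must be heard in order.
vocab = ["hey", "fire", "fox"]
inference_sequence = [0, 1, 2]
print(" ".join(vocab[i] for i in inference_sequence))  # prints: hey fire fox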

Training and Running a Model

  1. Source the relevant environment variables for training the res8 model: source envs/res8.env.
  2. Train the model: python -m training.run.train -i datasets/fire/positive datasets/fire/negative --model res8 --workspace workspaces/fire-res8. It's recommended to also use --use-stitched-datasets if the training datasets are small.
  3. For the CLI demo, run python -m training.run.demo --model res8 --workspace workspaces/fire-res8.

train_model.sh is also available; it encapsulates the individual commands into a single bash script:

./train_model.sh <env file path (e.g. envs/res8.env)> <model type (e.g. res8)> <workspace path (e.g. workspaces/fire-res8)> <dataset1 (e.g. datasets/fire-positive)> <dataset2 (e.g. datasets/fire-negative)> ...

Pretrained Models

howl-models contains workspaces with pretrained models.

To get the latest models, simply run git submodule update --init --recursive. The following command runs the CLI demo with the pretrained hey-fire-fox workspace:

VOCAB='["hey","fire","fox"]' INFERENCE_SEQUENCE=[0,1,2] INFERENCE_THRESHOLD=0 NUM_MELS=40 MAX_WINDOW_SIZE_SECONDS=0.5 python -m training.run.demo --model res8 --workspace howl-models/howl/hey-fire-fox

Installing Howl using pip

  1. Install PyAudio and PyTorch 1.5+ through your distribution's package system.

  2. Install Howl using pip

pip install howl

  3. To immediately use a pre-trained Howl model for inference, we provide the client API. The following example (also found under examples/hey_fire_fox.py) loads the "hey_fire_fox" pretrained model with a simple callback and starts the inference client:
from howl.client import HowlClient

def hello_callback(detected_words):
    print("Detected: {}".format(detected_words))

client = HowlClient()
client.from_pretrained("hey_fire_fox", force_reload=False)
client.add_listener(hello_callback)
client.start().join()
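The callback receives the words the model has detected. A slightly richer callback is sketched below; it assumes, as in the example above, that detected_words is an ordered sequence of recognized vocabulary words (the callback and its filtering logic are illustrative, not part of the Howl API).

from howl.client import HowlClient

# Illustrative variant: react only when the full "hey fire fox" phrase is seen.
def wake_callback(detected_words):
    if list(detected_words)[-3:] == ["hey", "fire", "fox"]:
        print("Wake word detected!")

client = HowlClient()
client.from_pretrained("hey_fire_fox", force_reload=False)
client.add_listener(wake_callback)
client.start().join()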

Reproducing Paper Results

First, follow the installation instructions in the quickstart guide.

Google Speech Commands

  1. Download the Google Speech Commands dataset and extract it.
  2. Source the appropriate environment variables: source envs/res8.env
  3. Set the dataset path to the root folder of the Speech Commands dataset: export DATASET_PATH=/path/to/dataset
  4. Train the res8 model: NUM_EPOCHS=20 MAX_WINDOW_SIZE_SECONDS=1 VOCAB='["yes","no","up","down","left","right","on","off","stop","go"]' BATCH_SIZE=64 LR_DECAY=0.8 LEARNING_RATE=0.01 python -m training.run.pretrain_gsc --model res8

Hey Firefox

  1. Download the Hey Firefox corpus, licensed under CC0, and extract it.
  2. Download our noise dataset, built from Microsoft SNSD and MUSAN, and extract it.
  3. Source the appropriate environment variables: source envs/res8.env
  4. Set the noise dataset path to the root folder: export NOISE_DATASET_PATH=/path/to/snsd
  5. Set the firefox dataset path to the root folder: export DATASET_PATH=/path/to/hey_firefox
  6. Train the model: LR_DECAY=0.98 VOCAB='["hey","fire","fox"]' USE_NOISE_DATASET=True BATCH_SIZE=16 INFERENCE_THRESHOLD=0 NUM_EPOCHS=300 NUM_MELS=40 INFERENCE_SEQUENCE=[0,1,2] MAX_WINDOW_SIZE_SECONDS=0.5 python -m training.run.train --model res8 --workspace workspaces/hey-ff-res8

Hey Snips

  1. Download the Hey Snips dataset.
  2. Process the dataset into a format Howl can load:

VOCAB='["hey","snips"]' INFERENCE_SEQUENCE=[0,1] DATASET_PATH=datasets/hey-snips python -m training.run.deprecated.create_raw_dataset --dataset-type 'hey-snips' -i ~/path/to/hey_snips_dataset

  3. Generate a mock (stub) alignment for the dataset, for the case where we don't care about alignment:

python -m training.run.attach_alignment \
  --input-raw-audio-dataset datasets/hey-snips \
  --token-type word \
  --alignment-type stub

  4. Use MFA to generate alignments for the dataset:

mfa_align datasets/hey-snips/audio eng.dict pretrained_models/english.zip datasets/hey-snips/alignments

  5. Attach the MFA alignment to the dataset:

python -m training.run.attach_alignment \
  --input-raw-audio-dataset datasets/hey-snips \
  --token-type word \
  --alignment-type mfa \
  --alignments-path datasets/hey-snips/alignments

  6. Source the appropriate environment variables: source envs/res8.env
  7. Set the noise dataset path to the root folder: export NOISE_DATASET_PATH=/path/to/snsd
  8. Set the Hey Snips dataset path to the root folder: export DATASET_PATH=/path/to/hey-snips
  9. Train the model: LR_DECAY=0.98 VOCAB='["hey","snips"]' USE_NOISE_DATASET=True BATCH_SIZE=16 INFERENCE_THRESHOLD=0 NUM_EPOCHS=300 NUM_MELS=40 INFERENCE_SEQUENCE=[0,1] MAX_WINDOW_SIZE_SECONDS=0.5 python -m training.run.train --model res8 --workspace workspaces/hey-snips-res8

Generating a Dataset for Mycroft Precise

Howl also provides a script for transforming a Howl dataset into a Mycroft Precise dataset:

VOCAB='["hey","fire","fox"]' INFERENCE_SEQUENCE=[0,1,2] python -m training.run.generate_precise_dataset --dataset-path /path/to/howl_dataset

Experiments

To verify the correctness of our implementation, we first train and evaluate our models on the Google Speech Commands dataset, for which many known results exist. Next, we curate a wake word detection dataset and report the resulting model quality.

For both experiments, we generate reports in Excel format. The experiments folder includes sample outputs for each experiment, and the corresponding workspaces can be found here.

commands_recognition

For command recognition, we train four different models (res8, LSTM, LAS encoder, MobileNetV2) to detect twelve different keywords: "yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", unknown, or silence.

python -m training.run.eval_commands_recognition --num_iterations n --dataset_path < path_to_gsc_datasets >

word_detection

In this experiment, we train our best command recognition model, res8, for Hey Firefox and Hey Snips and evaluate them at different thresholds.

Two performance reports are generated: one with clean audio and one with noisy audio.

python -m training.run.eval_wake_word_detection --num_models n --hop_size < number between 0 and 1 > --exp_type < hey_firefox | hey_snips > --dataset_path "x" --noiseset_path "y"

We also provide a script for generating an ROC curve. The exp_timestamp can be found in the reports generated by the previous command.

python -m training.run.generate_roc --exp_timestamp < experiment timestamp > --exp_type < hey_firefox | hey_snips >
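For reference, an ROC-style trade-off curve of this kind can be plotted from per-threshold results with a few lines of matplotlib. The sketch below is illustrative only: it is not the generate_roc script, and the rates are placeholder values, not actual results.

# Illustrative sketch: plotting a wake word operating curve from
# hypothetical per-threshold results (not Howl's generate_roc output format).
import matplotlib.pyplot as plt

false_alarms_per_hour = [9.0, 4.0, 2.0, 1.0, 0.5, 0.1]    # placeholder values
false_reject_rate = [0.01, 0.02, 0.05, 0.10, 0.20, 0.40]  # placeholder values

plt.plot(false_alarms_per_hour, false_reject_rate, marker="o")
plt.xlabel("False alarms per hour")
plt.ylabel("False reject rate")
plt.title("Wake word detection operating curve (illustrative)")
plt.show()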

More Repositories

  1. pyserini (Python, 1,640 stars): Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.
  2. anserini (Java, 1,025 stars): Anserini is a Lucene toolkit for reproducible information retrieval research
  3. daam (Jupyter Notebook, 657 stars): Diffusion attentive attribution maps for interpreting Stable Diffusion.
  4. hedwig (Python, 591 stars): PyTorch deep learning models for document classification
  5. honk (Python, 511 stars): PyTorch implementations of neural network models for keyword spotting
  6. docTTTTTquery (Python, 351 stars): docTTTTTquery document expansion model
  7. pygaggle (Jupyter Notebook, 339 stars): a gaggle of deep neural architectures for text ranking and question answering, designed for Pyserini
  8. rank_llm (Python, 282 stars): Repository for prompt-decoding using LLMs (GPT3.5, GPT4, Vicuna, and Zephyr)
  9. BuboQA (Python, 281 stars): Simple question answering over knowledge graphs (Mohammed et al., NAACL 2018)
  10. castor (Python, 179 stars): PyTorch deep learning models for text processing
  11. DeeBERT (Python, 152 stars): DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
  12. birch (Python, 143 stars): Document ranking via sentence modeling using BERT
  13. covidex (TypeScript, 137 stars): A multi-stage neural search engine for the COVID-19 Open Research Dataset
  14. duobert (Python, 112 stars): Multi-stage passage ranking: monoBERT + duoBERT
  15. MP-CNN-Torch (Lua, 107 stars): Multi-Perspective Convolutional Neural Networks for modeling textual similarity (He et al., EMNLP 2015)
  16. mr.tydi (Python, 70 stars): Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages.
  17. anserini-notebooks (Jupyter Notebook, 69 stars): Anserini notebooks
  18. honkling (JavaScript, 69 stars): Web app for keyword spotting using TensorflowJS
  19. afriberta (Python, 66 stars): AfriBERTa: Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages
  20. dhr (Python, 59 stars): Dense hybrid representations for text retrieval
  21. data (Python, 59 stars): Castorini data
  22. NCE-CNN-Torch (Lua, 54 stars): Noise-Contrastive Estimation for Question Answering with Convolutional Neural Networks (Rao et al. CIKM 2016)
  23. chatty-goose (Python, 40 stars): A Python framework for conversational search
  24. transformers-arithmetic (Python, 38 stars)
  25. d-bert (Python, 35 stars): Distilling BERT using natural language generation.
  26. hf-spacerini (Python, 32 stars): Plug-and-play Search Interfaces with Pyserini and Hugging Face
  27. ragnarok (Python, 32 stars): Retrieval-Augmented Generation battle!
  28. anserini-tools (Python, 28 stars): Evaluation tools shared across anserini, pyserini, and pygaggle
  29. bertserini (Python, 25 stars): BERTserini
  30. SimpleDBpediaQA (Python, 25 stars): simple QA over knowledge graphs on DBpedia
  31. onboarding (24 stars): Onboarding guide to Jimmy Lin's research group at the University of Waterloo
  32. berxit (Python, 21 stars)
  33. umbrela (Python, 20 stars)
  34. VDPWI-NN-Torch (Lua, 19 stars): Very Deep Pairwise Word Interaction Neural Networks for modeling textual similarity (He and Lin, NAACL/HLT 2016)
  35. perm-sc (Python, 16 stars): Official codebase for permutation self-consistency.
  36. LiT5 (Python, 15 stars)
  37. TREC-COVID (Roff, 14 stars): TREC-COVID results - this is a mirror of data on the TREC website in a more convenient format.
  38. honk-models (11 stars): Pre-trained models for Honk
  39. howl-deploy (JavaScript, 10 stars): JavaScript deployment for Howl, the wake word detection modeling toolkit for Firefox Voice
  40. Tweets2013-IA (Scala, 10 stars): The Tweets2013 Internet Archive collection
  41. AfriTeVa-keji (Python, 10 stars)
  42. TrecQA-NegEx (Python, 10 stars): Code and dataset for SIGIR 2017 short paper "Automatically Extracting High-Quality Negative Examples for Answer Selection in Question Answering"
  43. meanmax (Python, 9 stars): MeanMax estimators.
  44. cqe (Python, 9 stars)
  45. SM-CNN-Torch (Lua, 9 stars): Torch implementation of Severyn and Moschitti's SIGIR 2015 CNN model for question answering
  46. ONNX-demo (Python, 8 stars)
  47. anserini-notebooks-afirm2020 (Jupyter Notebook, 7 stars): Colab notebooks for AFIRM '20
  48. serverless-bert-reranking (Python, 7 stars)
  49. parrot (Python, 7 stars): Keyword spotting using audio from speech synthesis services and YouTube
  50. touche-error-analysis (Python, 7 stars): A reproduction study of the Touché 2020 dataset in the BEIR benchmark
  51. earlyexiting-monobert (Python, 7 stars)
  52. afriteva (Python, 6 stars): Text - 2 - Text for African languages
  53. tct_colbert (Python, 6 stars)
  54. transformers-selective (Python, 5 stars)
  55. serverless-inference (Python, 5 stars): Neural network inference on serverless architecture
  56. norbert (Python, 4 stars): NorBERT: Anserini + dl4marco-bert
  57. anserini-spark (Java, 3 stars): Anserini-Spark integration
  58. rank_llm_data (3 stars)
  59. numbert (Python, 3 stars): Passage Ranking Library using various pretrained LMs
  60. kim-cnn-vis (JavaScript, 3 stars): An in-browser visualization of Kim CNN
  61. replicate-lce (Python, 3 stars)
  62. kws-gen-data (2 stars): Data for KWS generator.
  63. pyserini-data (Python, 2 stars)
  64. BuboQA-models (2 stars)
  65. candle (Python, 2 stars): PyTorch utilities for parameter pruning and multiplies reduction
  66. gooselight2 (Ruby, 2 stars): Search frontend for Anserini
  67. africlirmatrix (2 stars): AfriCLIRMatrix is a test collection for cross-lingual information retrieval research in 15 diverse African languages.
  68. biasprobe (Python, 2 stars)
  69. sigtestv (Python, 1 star): SIGnificance TESTing Violations: an end-to-end toolkit for evaluating neural networks.
  70. howl-models (1 star)
  71. SolrAnserini (Python, 1 star): Anserini integration with Solr
  72. gooselight (Ruby, 1 star): 🦆 Anserini + Blacklight 🦆
  73. anlessini (Java, 1 star)
  74. honkling-models (JavaScript, 1 star)
  75. BuboQA-data (1 star): Hosting dataset for BuboQA
  76. ragnarok_data (1 star)