
Honk: CNNs for Keyword Spotting

Honk is a PyTorch reimplementation of Google's TensorFlow convolutional neural networks for keyword spotting, which accompanies the recent release of their Speech Commands Dataset. For more details, please consult our writeup.

Honk is useful for building on-device speech recognition capabilities for interactive intelligent agents. Our code can be used to identify simple commands (e.g., "stop" and "go") and be adapted to detect custom "command triggers" (e.g., "Hey Siri!").

Check out this video for a demo of Honk in action!

Demo Application

Use the instructions below to run the demo application (shown in the above video) yourself!

Currently, PyTorch has official support for only Linux and OS X. Thus, Windows users will not be able to run this demo easily.

To deploy the demo, run the following commands:

  • If you do not have PyTorch, please see the website.
  • Install Python dependencies: pip install -r requirements.txt
  • Install GLUT (OpenGL Utility Toolkit) through your package manager (e.g. apt-get install freeglut3-dev)
  • Fetch the data and models: ./fetch_data.sh
  • Start the PyTorch server: python .
  • Run the demo: python utils/speech_demo.py

If you need to adjust options, like turning off CUDA, please edit config.json.
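
The snippet below is a minimal sketch of flipping such an option programmatically; the no_cuda key is an assumption mirroring the --no_cuda training flag, so verify the actual key name in your config.json:

    import json

    # Hypothetical sketch: disable CUDA by rewriting config.json.
    # The "no_cuda" key is an assumption based on the --no_cuda training
    # flag; check your config.json for the real key name.
    with open("config.json") as f:
        config = json.load(f)
    config["no_cuda"] = True
    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)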

Additional notes for Mac OS X:

  • GLUT is already installed on Mac OS X, so that step isn't needed.
  • If you have trouble installing pyaudio, this may be the issue.

Server

Setup and deployment

python . deploys the web service for identifying whether audio contains the command word. By default, config.json is used for configuration, but that can be changed with --config=<file_name>. If the server is behind a firewall, one workflow is to create an SSH tunnel and use port forwarding with the port specified in the config (default 16888).
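
Once deployed, a plain socket check is a quick way to confirm the server (or your SSH tunnel) is accepting connections; this is a minimal sketch that assumes a local deployment on the default port:

    import socket

    # Sanity check: the Honk server should be listening on the port
    # specified in config.json (16888 by default).
    with socket.create_connection(("localhost", 16888), timeout=5):
        print("Server reachable on port 16888")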

In our honk-models repository, there are several pre-trained models for Caffe2 (ONNX) and PyTorch. The fetch_data.sh script fetches these models and extracts them to the model directory. You may specify which model and backend to use via the config file's model_path and backend options, respectively. Specifically, backend can be either caffe2 or pytorch, depending on the format of model_path. Note that, in order to run our ONNX models, the packages onnx and onnx_caffe2 must be present on your system; they are not listed in requirements.txt.
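
As a rough illustration (not the server's actual loading code; the file name is simply the model fetched by fetch_data.sh), preparing an ONNX model with the Caffe2 backend looks roughly like this:

    import onnx
    import onnx_caffe2.backend

    # Sketch: load an ONNX keyword-spotting model and prepare a Caffe2
    # inference backend. Exact module paths may vary across onnx-caffe2
    # versions.
    model = onnx.load("model/google-speech-dataset-full.onnx")
    backend = onnx_caffe2.backend.prepare(model)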

Raspberry Pi (RPi) Infrastructure Setup

Unfortunately, getting the libraries to work on the RPi, especially librosa, isn't as straightforward as running a few commands. We outline our process, which may or may not work for you.

  1. Obtain an RPi, preferably an RPi 3 Model B running Raspbian. Specifically, we used this version of Raspbian Stretch.
  2. Install dependencies: sudo apt-get install -y protobuf-compiler libprotoc-dev python-numpy python-pyaudio python-scipy python-sklearn
  3. Install Protobuf: pip install protobuf
  4. Install ONNX without dependencies: pip install --no-deps onnx
  5. Follow the official instructions for installing Caffe2 on Raspbian. This process takes about two hours. You may need to add the caffe2 module path to the PYTHONPATH environment variable. For us, this was accomplished by export PYTHONPATH=$PYTHONPATH:/home/pi/caffe2/build
  6. Install the ONNX extension for Caffe2: pip install onnx-caffe2
  7. Install further requirements: pip install -r requirements_rpi.txt
  8. Install librosa: pip install --no-deps resampy librosa
  9. Try importing librosa: python -c "import librosa". It should throw an error regarding numba, since we haven't installed it.
  10. We haven't found a way to easily install numba on the RPi, so we need to remove it from resampy. For our setup, we needed to remove numba and @numba.jit from /home/pi/.local/lib/python2.7/site-packages/resampy/interpn.py
  11. All dependencies should now be installed; a quick import check (shown after this list) verifies this. We should now try deploying an ONNX model.
  12. Fetch the models and data: ./fetch_data.sh
  13. In config.json, change backend to caffe2 and model_path to model/google-speech-dataset-full.onnx.
  14. Deploy the server: python . If there are no errors, you have successfully deployed the model, accessible via port 16888 by default.
  15. Run the speech commands demo: python utils/speech_demo.py. You'll need a working microphone and speakers. If you're interacting with your RPi remotely, you can run the speech demo locally and specify the remote endpoint --server-endpoint=http://[RPi IP address]:16888.
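
The import check referenced in step 11 is just a few lines; module names follow the pip packages installed above:

    # All of these imports should succeed once steps 1-10 have completed.
    import onnx
    import onnx_caffe2
    import librosa

    print("All RPi dependencies import cleanly")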

Utilities

QA client

Unfortunately, the QA client has no support for the general public yet, since it requires a custom QA service. However, it can still be used to retarget the command keyword.

python client.py runs the QA client. You may retarget a keyword by doing python client.py --mode=retarget. Please note that text-to-speech may not work well on Linux distros; in this case, please supply IBM Watson credentials via --watson-username and --watson-password. You can view all the options by doing python client.py -h.

Training and evaluating the model

CNN models. python -m utils.train --type [train|eval] trains or evaluates the model. It expects all training examples to follow the same format as that of the Speech Commands Dataset. The recommended workflow is to download the dataset and add custom keywords, since the dataset already contains many useful audio samples and background noise.
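
For reference, that format is one folder per keyword containing one-second .wav clips, plus a _background_noise_ folder of noise recordings; the file names below are illustrative:

    speech_dataset/
        yes/
            0a2b400e_nohash_0.wav
            ...
        no/
            ...
        _background_noise_/
            white_noise.wav
            ...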

Residual models. We recommend the following hyperparameters for training any of our res{8,15,26}[-narrow] models on the Speech Commands Dataset:

python -m utils.train --wanted_words yes no up down left right on off stop go --dev_every 1 --n_labels 12 --n_epochs 26 --weight_decay 0.00001 --lr 0.1 0.01 0.001 --schedule 3000 6000 --model res{8,15,26}[-narrow]

For more information about our deep residual models, please see our paper.

The following command options are available (an example invocation follows the table):

| option | input format | default | description |
|---|---|---|---|
| --audio_preprocess_type | {MFCCs, PCEN} | MFCCs | type of audio preprocessing to use |
| --batch_size | [1, n) | 100 | the mini-batch size to use |
| --cache_size | [0, inf) | 32768 | number of items in audio cache; consumes around 32 KB * n |
| --conv1_pool | [1, inf) [1, inf) | 2 2 | the width and height of the pool filter |
| --conv1_size | [1, inf) [1, inf) | 10 4 | the width and height of the conv filter |
| --conv1_stride | [1, inf) [1, inf) | 1 1 | the width and height of the stride |
| --conv2_pool | [1, inf) [1, inf) | 1 1 | the width and height of the pool filter |
| --conv2_size | [1, inf) [1, inf) | 10 4 | the width and height of the conv filter |
| --conv2_stride | [1, inf) [1, inf) | 1 1 | the width and height of the stride |
| --data_folder | string | /data/speech_dataset | path to data |
| --dev_every | [1, inf) | 10 | dev interval in terms of epochs |
| --dev_pct | [0, 100] | 10 | percentage of total set to use for dev |
| --dropout_prob | [0.0, 1.0) | 0.5 | the dropout rate to use |
| --gpu_no | [-1, n] | 1 | the gpu to use |
| --group_speakers_by_id | {true, false} | true | whether to group speakers across train/dev/test |
| --input_file | string | | the path to the model to load |
| --input_length | [1, inf) | 16000 | the length of the audio |
| --lr | (0.0, inf) | {0.1, 0.001} | the learning rate to use |
| --type | {train, eval} | train | the mode to use |
| --model | string | cnn-trad-pool2 | one of cnn-trad-pool2, cnn-tstride-{2,4,8}, cnn-tpool{2,3}, cnn-one-fpool3, cnn-one-fstride{4,8}, res{8,15,26}[-narrow], cnn-trad-fpool3, cnn-one-stride1 |
| --momentum | [0.0, 1.0) | 0.9 | the momentum to use for SGD |
| --n_dct_filters | [1, inf) | 40 | the number of DCT bases to use |
| --n_epochs | [0, inf) | 500 | number of epochs |
| --n_feature_maps | [1, inf) | {19, 45} | the number of feature maps to use for the residual architecture |
| --n_feature_maps1 | [1, inf) | 64 | the number of feature maps for conv net 1 |
| --n_feature_maps2 | [1, inf) | 64 | the number of feature maps for conv net 2 |
| --n_labels | [1, n) | 4 | the number of labels to use |
| --n_layers | [1, inf) | {6, 13, 24} | the number of convolution layers for the residual architecture |
| --n_mels | [1, inf) | 40 | the number of Mel filters to use |
| --no_cuda | switch | false | disables CUDA when set |
| --noise_prob | [0.0, 1.0] | 0.8 | the probability of mixing with noise |
| --output_file | string | model/google-speech-dataset.pt | the file to save the model to |
| --seed | (-inf, inf) | 0 | the seed to use |
| --silence_prob | [0.0, 1.0] | 0.1 | the probability of picking silence |
| --test_pct | [0, 100] | 10 | percentage of total set to use for testing |
| --timeshift_ms | [0, inf) | 100 | time in milliseconds to shift the audio randomly |
| --train_pct | [0, 100] | 80 | percentage of total set to use for training |
| --unknown_prob | [0.0, 1.0] | 0.1 | the probability of picking an unknown word |
| --wanted_words | string1 string2 ... stringn | command random | the desired target words |
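
For example, a hypothetical evaluation run built only from the flags above (the model path is the default --output_file from a prior training run):

    python -m utils.train --type eval --input_file model/google-speech-dataset.pt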

JavaScript-based Keyword Spotting

Honkling is a JavaScript implementation of Honk. With Honkling, it is possible to implement various web applications with in-browser keyword spotting functionality.

Keyword Spotting Data Generator

To improve the flexibility of Honk and Honkling, we provide a program that constructs a dataset from YouTube videos. Details can be found in the keyword_spotting_data_generator folder.

Recording audio

You may do the following to record sequential audio and save it in the same format as the Speech Commands Dataset:

python -m utils.record

Press Return to record, the up arrow to undo, and "q" to finish. After one second of silence, recording automatically halts.

Several options are available:

--output-begin-index: Starting sequence number
--output-prefix: Prefix of the output audio sequence
--post-process: How the audio samples should be post-processed. One or more of "trim" and "discard_true".

Post-processing consists of trimming or discarding "useless" audio. Trimming is self-explanatory: the audio recordings are trimmed to the loudest window of x milliseconds, specified by --cutoff-ms. Discarding "useless" audio (discard_true) uses a pre-trained model to determine which samples are confusing, discarding correctly labeled ones. The pre-trained model and correct label are defined by --config and --correct-label, respectively.

For example, consider python -m utils.record --post-process trim discard_true --correct-label no --config config.json. In this case, the utility records a sequence of speech snippets, trims them to one second, and finally discards those not labeled "no" by the model in config.json.

Listening to sound level

python manage_audio.py listen

This assists in setting sane values for --min-sound-lvl for recording.

Generating contrastive examples

python manage_audio.py generate-contrastive --directory [directory] generates contrastive examples from all .wav files in [directory] using phonetic segmentation.

Trimming audio

The Speech Commands Dataset contains one-second snippets of audio.

python manage_audio.py trim --directory [directory] trims each .wav file in [directory] to its loudest one-second segment. The careful user should manually check all audio samples using an audio editor like Audacity.
