SpeechRecognition

Library for performing speech recognition, with support for several engines and APIs, online and offline.

UPDATE 2022-02-09: Hey everyone! This project started as a tech demo, but these days it needs more time than I have to keep up with all the PRs and issues. Therefore, I'd like to put out an open invite for collaborators - just reach out at [email protected] if you're interested!

Speech recognition engine/API support:

  • CMU Sphinx (works offline)
  • Google Speech Recognition
  • Google Cloud Speech API
  • Wit.ai
  • Microsoft Bing Voice Recognition
  • api.ai
  • Houndify API
  • IBM Speech to Text
  • Vosk API (works offline)
  • OpenAI Whisper (works offline)
  • Whisper API

Quickstart: pip install SpeechRecognition. See the "Installing" section for more details.

To quickly try it out, run python -m speech_recognition after installing.
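To go beyond the demo script, a minimal sketch of the file-transcription flow looks like the following; the hello.wav path is a placeholder for any WAV/AIFF/FLAC file, and recognize_google is just one of the recognizers covered in the "Requirements" section.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("hello.wav") as source:  # placeholder path - any WAV/AIFF/FLAC file works
    audio = r.record(source)               # read the entire file into an AudioData instance

try:
    print("Transcription: " + r.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as e:
    print("Recognition request failed; {0}".format(e))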

Project links:

Library Reference

The library reference documents every publicly accessible object in the library. This document is also included under reference/library-reference.rst.

See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.

To use Vosk, you also have to install a Vosk model. Models are available for download from the Vosk project; place them in a models folder inside your project, like "your-project-folder/models/your-vosk-model".

Examples

See the examples/ directory in the repository root for usage examples.

Installing

First, make sure you have all the requirements listed in the "Requirements" section.

The easiest way to install this is using pip install SpeechRecognition.

Otherwise, download the source distribution from PyPI, and extract the archive.

In the folder, run python setup.py install.

Requirements

To use all of the functionality of the library, you should have:

  • Python 3.8+ (required)
  • PyAudio 0.2.11+ (required only if you need to use microphone input, Microphone)
  • PocketSphinx (required only if you need to use the Sphinx recognizer, recognizer_instance.recognize_sphinx)
  • Google API Client Library for Python (required only if you need to use the Google Cloud Speech API, recognizer_instance.recognize_google_cloud)
  • FLAC encoder (required only if the system is not x86-based Windows/Linux/OS X)
  • Vosk (required only if you need to use Vosk API speech recognition recognizer_instance.recognize_vosk)
  • Whisper (required only if you need to use Whisper recognizer_instance.recognize_whisper)
  • openai (required only if you need to use Whisper API speech recognition recognizer_instance.recognize_whisper_api)

The following requirements are optional, but can improve or extend functionality in some situations:

  • If using CMU Sphinx, you may want to install additional language packs to support languages like International French or Mandarin Chinese.

The following sections go over the details of each requirement.

Python

The first software requirement is Python 3.8+. This is required to use the library.

PyAudio (for microphone users)

PyAudio is required if and only if you want to use microphone input (Microphone). PyAudio version 0.2.11+ is required, as earlier versions have known memory management bugs when recording from microphones in certain situations.

If not installed, everything in the library will still work, except attempting to instantiate a Microphone object will raise an AttributeError.
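With PyAudio installed, capturing a phrase from the default microphone looks roughly like the sketch below; the final recognize_google call is only an example and can be swapped for any other recognizer.

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:  # raises AttributeError here if PyAudio is not installed
    print("Say something!")
    audio = r.listen(source)     # record a single phrase from the microphone

print("You said: " + r.recognize_google(audio))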

The installation instructions on the PyAudio website are quite good - for convenience, they are summarized below:

  • On Windows, install PyAudio using Pip: execute pip install pyaudio in a terminal.
  • On Debian-derived Linux distributions (like Ubuntu and Mint), install PyAudio using APT: execute sudo apt-get install python-pyaudio python3-pyaudio in a terminal.
    • If the version in the repositories is too old, install the latest release using Pip: execute sudo apt-get install portaudio19-dev python-all-dev python3-all-dev && sudo pip install pyaudio (replace pip with pip3 if using Python 3).
  • On OS X, install PortAudio using Homebrew: brew install portaudio. Then, install PyAudio using Pip: pip install pyaudio.
  • On other POSIX-based systems, install the portaudio19-dev and python-all-dev (or python3-all-dev if using Python 3) packages (or their closest equivalents) using a package manager of your choice, and then install PyAudio using Pip: pip install pyaudio (replace pip with pip3 if using Python 3).

PyAudio wheel packages for common 64-bit Python versions on Windows and Linux are included for convenience, under the third-party/ directory in the repository root. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the repository root directory.

PocketSphinx-Python (for Sphinx users)

PocketSphinx-Python is required if and only if you want to use the Sphinx recognizer (recognizer_instance.recognize_sphinx).

PocketSphinx-Python wheel packages for 64-bit Python 3.4 and 3.5 on Windows are included for convenience, under the third-party/ directory. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the SpeechRecognition folder.

On Linux and other POSIX systems (such as OS X), follow the instructions under "Building PocketSphinx-Python from source" in Notes on using PocketSphinx for installation instructions.

Note that the versions available in most package repositories are outdated and will not work with the bundled language data. Using the bundled wheel packages or building from source is recommended.

See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.
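Assuming PocketSphinx-Python installed cleanly, an offline transcription attempt is roughly the sketch below; hello.wav is a placeholder path, and the audio could just as well come from a Microphone as shown earlier.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("hello.wav") as source:  # placeholder path
    audio = r.record(source)

try:
    print("Sphinx thinks you said: " + r.recognize_sphinx(audio))  # runs fully offline
except sr.UnknownValueError:
    print("Sphinx could not understand the audio")
except sr.RequestError as e:
    print("Sphinx error; {0}".format(e))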

Vosk (for Vosk users)

Vosk API is required if and only if you want to use Vosk recognizer (recognizer_instance.recognize_vosk).

You can install it with python3 -m pip install vosk.

You also have to install Vosk Models:

Models are available for download from the Vosk project. Place them in a models folder inside your project, like "your-project-folder/models/your-vosk-model".
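With the vosk package installed and a model unpacked as described above, usage is roughly the following sketch; in the versions I have tried, recognize_vosk returns a small JSON string rather than plain text, but treat that detail (and the exact model lookup path) as assumptions and check the library reference.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("hello.wav") as source:  # placeholder path
    audio = r.record(source)

result = r.recognize_vosk(audio)  # offline; needs a Vosk model available locally (see note above)
print(result)                     # often a JSON string such as {"text": "..."} (assumption)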

Google Cloud Speech Library for Python (for Google Cloud Speech API users)

Google Cloud Speech library for Python is required if and only if you want to use the Google Cloud Speech API (recognizer_instance.recognize_google_cloud).

If not installed, everything in the library will still work, except calling recognizer_instance.recognize_google_cloud will raise a RequestError.

According to the official installation instructions, the recommended way to install this is using Pip: execute pip install google-cloud-speech (replace pip with pip3 if using Python 3).
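With google-cloud-speech installed and a service account key on hand, the call looks roughly like this; the credentials_json keyword and the service-account.json filename are assumptions on my part, so check the library reference for how your version expects credentials.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("hello.wav") as source:  # placeholder path
    audio = r.record(source)

# service-account.json is a hypothetical filename for your Google Cloud service account key
with open("service-account.json") as f:
    credentials = f.read()

try:
    print(r.recognize_google_cloud(audio, credentials_json=credentials))  # keyword name is an assumption
except sr.RequestError as e:
    print("Google Cloud Speech request failed; {0}".format(e))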

FLAC (for some systems)

A FLAC encoder is required to encode the audio data to send to the API. If using Windows (x86 or x86-64), OS X (Intel Macs only, OS X 10.6 or higher), or Linux (x86 or x86-64), this is already bundled with this library - you do not need to install anything.

Otherwise, ensure that you have the flac command line tool, which is often available through the system package manager. For example, this would usually be sudo apt-get install flac on Debian-derivatives, or brew install flac on OS X with Homebrew.

Whisper (for Whisper users)

Whisper is required if and only if you want to use Whisper (recognizer_instance.recognize_whisper).

You can install it with python3 -m pip install git+https://github.com/openai/whisper.git soundfile.
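Once Whisper and soundfile are installed, offline transcription looks roughly like the sketch below; the model keyword (here "base") is how model size is chosen in the versions I have used, but confirm it against the library reference.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("hello.wav") as source:  # placeholder path
    audio = r.record(source)

# runs locally; the first call downloads the selected Whisper model (the model keyword is an assumption)
print(r.recognize_whisper(audio, model="base"))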

Whisper API (for Whisper API users)

The library openai is required if and only if you want to use Whisper API (recognizer_instance.recognize_whisper_api).

If not installed, everything in the library will still work, except calling recognizer_instance.recognize_whisper_api will raise a RequestError.

You can install it with python3 -m pip install openai.
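With openai installed, a call looks roughly like this; passing the key via the api_key keyword is what I have seen, and it may also be read from the OPENAI_API_KEY environment variable, but treat both as assumptions.

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("hello.wav") as source:  # placeholder path
    audio = r.record(source)

# "YOUR_OPENAI_API_KEY" is a placeholder; the api_key keyword is an assumption
print(r.recognize_whisper_api(audio, api_key="YOUR_OPENAI_API_KEY"))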

Troubleshooting

The recognizer tries to recognize speech even when I'm not speaking, or after I'm done speaking.

Try increasing the recognizer_instance.energy_threshold property. This is basically how sensitive the recognizer is to when recognition should start. Higher values mean that it will be less sensitive, which is useful if you are in a loud room.

This value depends entirely on your microphone or audio data. There is no one-size-fits-all value, but good values typically range from 50 to 4000.

Also, check on your microphone volume settings. If it is too sensitive, the microphone may be picking up a lot of ambient noise. If it is too insensitive, the microphone may be rejecting speech as just noise.
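As a quick sketch of that tuning: set the threshold before listening, and optionally turn on the dynamic adjustment mentioned in the next entry (the dynamic_energy_threshold attribute name is an assumption; check the library reference).

import speech_recognition as sr

r = sr.Recognizer()
r.energy_threshold = 4000          # near the top of the typical 50-4000 range, for loud rooms
r.dynamic_energy_threshold = True  # keep adapting the threshold while listening (attribute name is an assumption)

with sr.Microphone() as source:
    audio = r.listen(source)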

The recognizer can't recognize speech right after it starts listening for the first time.

The recognizer_instance.energy_threshold property is probably set to a value that is too high to start off with, and then being adjusted lower automatically by dynamic energy threshold adjustment. Before it is at a good level, the energy threshold is so high that speech is just considered ambient noise.

The solution is to decrease this threshold, or call recognizer_instance.adjust_for_ambient_noise beforehand, which will set the threshold to a good value automatically.
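In code, that looks roughly like the sketch below; the duration keyword (how many seconds of ambient audio to sample) is an assumption on my part, with one second being the usual starting point.

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source, duration=1)  # sample ~1 second of ambient noise (duration keyword is an assumption)
    audio = r.listen(source)                        # the energy threshold now starts at a sensible level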

The recognizer doesn't understand my particular language/dialect.

Try setting the recognition language to your language/dialect. To do this, see the documentation for recognizer_instance.recognize_sphinx, recognizer_instance.recognize_google, recognizer_instance.recognize_wit, recognizer_instance.recognize_bing, recognizer_instance.recognize_api, recognizer_instance.recognize_houndify, and recognizer_instance.recognize_ibm.

For example, if your language/dialect is British English, it is better to use "en-GB" as the language rather than "en-US".
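With the free Google recognizer, for instance, that selection looks roughly like this; the language keyword is what I have seen for recognize_google, while other recognizers use their own parameter names, so check the documentation for each.

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    audio = r.listen(source)

print(r.recognize_google(audio, language="en-GB"))  # British English instead of the en-US default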

The recognizer hangs on recognizer_instance.listen; specifically, when it's calling Microphone.MicrophoneStream.read.

This usually happens when you're using a Raspberry Pi board, which doesn't have audio input capabilities by itself. This causes the default microphone used by PyAudio to simply block when we try to read it. If you happen to be using a Raspberry Pi, you'll need a USB sound card (or USB microphone).

Once you do this, change all instances of Microphone() to Microphone(device_index=MICROPHONE_INDEX), where MICROPHONE_INDEX is the hardware-specific index of the microphone.

To figure out what the value of MICROPHONE_INDEX should be, run the following code:

import speech_recognition as sr
for index, name in enumerate(sr.Microphone.list_microphone_names()):
    print("Microphone with name \"{1}\" found for `Microphone(device_index={0})`".format(index, name))

This will print out something like the following:

Microphone with name "HDA Intel HDMI: 0 (hw:0,3)" found for `Microphone(device_index=0)`
Microphone with name "HDA Intel HDMI: 1 (hw:0,7)" found for `Microphone(device_index=1)`
Microphone with name "HDA Intel HDMI: 2 (hw:0,8)" found for `Microphone(device_index=2)`
Microphone with name "Blue Snowball: USB Audio (hw:1,0)" found for `Microphone(device_index=3)`
Microphone with name "hdmi" found for `Microphone(device_index=4)`
Microphone with name "pulse" found for `Microphone(device_index=5)`
Microphone with name "default" found for `Microphone(device_index=6)`

Now, to use the Snowball microphone, you would change Microphone() to Microphone(device_index=3).

Calling Microphone() gives the error IOError: No Default Input Device Available.

As the error says, the program doesn't know which microphone to use.

To proceed, either use Microphone(device_index=MICROPHONE_INDEX, ...) instead of Microphone(...), or set a default microphone in your OS. You can obtain possible values of MICROPHONE_INDEX using the code in the troubleshooting entry right above this one.

The program doesn't run when compiled with PyInstaller.

As of PyInstaller version 3.0, SpeechRecognition is supported out of the box. If you're getting weird issues when compiling your program using PyInstaller, simply update PyInstaller.

You can easily do this by running pip install --upgrade pyinstaller.

On Ubuntu/Debian, I get annoying output in the terminal saying things like "bt_audio_service_open: [...] Connection refused" and various others.

The "bt_audio_service_open" error means that you have a Bluetooth audio device, but as a physical device is not currently connected, we can't actually use it - if you're not using a Bluetooth microphone, then this can be safely ignored. If you are, and audio isn't working, then double check to make sure your microphone is actually connected. There does not seem to be a simple way to disable these messages.

For errors of the form "ALSA lib [...] Unknown PCM", see this StackOverflow answer. Basically, to get rid of an error of the form "Unknown PCM cards.pcm.rear", simply comment out pcm.rear cards.pcm.rear in /usr/share/alsa/alsa.conf, ~/.asoundrc, and /etc/asound.conf.

For "jack server is not running or cannot be started" or "connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)" or "attempt to connect to server failed", these are caused by ALSA trying to connect to JACK, and can be safely ignored. I'm not aware of any simple way to turn those messages off at this time, besides entirely disabling printing while starting the microphone.

On OS X, I get a ChildProcessError saying that it couldn't find the system FLAC converter, even though it's installed.

Installing FLAC for OS X directly from the source code will not work, since it doesn't correctly add the executables to the search path.

Installing FLAC using Homebrew ensures that the search path is correctly updated. First, ensure you have Homebrew, then run brew install flac to install the necessary files.

Developing

To hack on this library, first make sure you have all the requirements listed in the "Requirements" section.

  • Most of the library code lives in speech_recognition/__init__.py.
  • Examples live under the examples/ directory, and the demo script lives in speech_recognition/__main__.py.
  • The FLAC encoder binaries are in the speech_recognition/ directory.
  • Documentation can be found in the reference/ directory.
  • Third-party libraries, utilities, and reference material are in the third-party/ directory.

To install/reinstall the library locally, run python setup.py install in the project root directory.

Before a release, the version number is bumped in README.rst and speech_recognition/__init__.py. Version tags are then created using git config gpg.program gpg2 && git config user.signingkey DB45F6C431DE7C2DCD99FF7904882258A4063489 && git tag -s VERSION_GOES_HERE -m "Version VERSION_GOES_HERE".

Releases are done by running make-release.sh VERSION_GOES_HERE to build the Python source packages, sign them, and upload them to PyPI.

Testing

To run all the tests:

python -m unittest discover --verbose

Testing is also done automatically by TravisCI, upon every push. To set up the environment for offline/local Travis-like testing on a Debian-like system:

sudo docker run --volume "$(pwd):/speech_recognition" --interactive --tty quay.io/travisci/travis-python:latest /bin/bash
su - travis && cd /speech_recognition
sudo apt-get update && sudo apt-get install swig libpulse-dev
pip install --user pocketsphinx && pip install --user flake8 rstcheck && pip install --user -e .
python -m unittest discover --verbose # run unit tests
python -m flake8 --ignore=E501,E701 speech_recognition tests examples setup.py # ignore errors for long lines and multi-statement lines
python -m rstcheck README.rst reference/*.rst # ensure RST is well-formed

FLAC Executables

The included flac-win32 executable is the official FLAC 1.3.2 32-bit Windows binary.

The included flac-linux-x86 and flac-linux-x86_64 executables are built from the FLAC 1.3.2 source code with Manylinux to ensure that it's compatible with a wide variety of distributions.

The built FLAC executables should be bit-for-bit reproducible. To rebuild them, run the following inside the project directory on a Debian-like system:

# download and extract the FLAC source code
cd third-party
sudo apt-get install --yes docker.io

# build FLAC inside the Manylinux i686 Docker image
tar xf flac-1.3.2.tar.xz
sudo docker run --tty --interactive --rm --volume "$(pwd):/root" quay.io/pypa/manylinux1_i686:latest bash
    cd /root/flac-1.3.2
    ./configure LDFLAGS=-static # compiler flags to make a static build
    make
exit
cp flac-1.3.2/src/flac/flac ../speech_recognition/flac-linux-x86 && sudo rm -rf flac-1.3.2/

# build FLAC inside the Manylinux x86_64 Docker image
tar xf flac-1.3.2.tar.xz
sudo docker run --tty --interactive --rm --volume "$(pwd):/root" quay.io/pypa/manylinux1_x86_64:latest bash
    cd /root/flac-1.3.2
    ./configure LDFLAGS=-static # compiler flags to make a static build
    make
exit
cp flac-1.3.2/src/flac/flac ../speech_recognition/flac-linux-x86_64 && sudo rm -r flac-1.3.2/

The included flac-mac executable is extracted from xACT 2.39, which is a frontend for FLAC 1.3.2 that conveniently includes binaries for all of its encoders. Specifically, it is a copy of xACT 2.39/xACT.app/Contents/Resources/flac in xACT2.39.zip.

Authors

Uberi <[email protected]> (Anthony Zhang)
bobsayshilol
arvindch <[email protected]> (Arvind Chembarpu)
kevinismith <[email protected]> (Kevin Smith)
haas85
DelightRun <[email protected]>
maverickagm
kamushadenes <[email protected]> (Kamus Hadenes)
sbraden <[email protected]> (Sarah Braden)
tb0hdan (Bohdan Turkynewych)
Thynix <[email protected]> (Steve Dougherty)
beeedy <[email protected]> (Broderick Carlin)

Please report bugs and suggestions at the issue tracker!

How to cite this library (APA style):

Zhang, A. (2017). Speech Recognition (Version 3.8) [Software]. Available from https://github.com/Uberi/speech_recognition#readme.

How to cite this library (Chicago style):

Zhang, Anthony. 2017. Speech Recognition (version 3.8).

Also check out the Python Baidu Yuyin API, which is based on an older version of this project, and adds support for Baidu Yuyin. Note that Baidu Yuyin is only available inside China.

License

Copyright 2014-2017 Anthony Zhang (Uberi). The source code for this library is available online at GitHub.

SpeechRecognition is made available under the 3-clause BSD license. See LICENSE.txt in the project's root directory for more information.

For convenience, all the official distributions of SpeechRecognition already include a copy of the necessary copyright notices and licenses. In your project, you can simply say that licensing information for SpeechRecognition can be found within the SpeechRecognition README, and make sure SpeechRecognition is visible to users if they wish to see it.

SpeechRecognition distributes source code, binaries, and language files from CMU Sphinx. These files are BSD-licensed and redistributable as long as copyright notices are correctly retained. See speech_recognition/pocketsphinx-data/*/LICENSE*.txt and third-party/LICENSE-Sphinx.txt for license details for individual parts.

SpeechRecognition distributes source code and binaries from PyAudio. These files are MIT-licensed and redistributable as long as copyright notices are correctly retained. See third-party/LICENSE-PyAudio.txt for license details.

SpeechRecognition distributes binaries from FLAC - speech_recognition/flac-win32.exe, speech_recognition/flac-linux-x86, and speech_recognition/flac-mac. These files are GPLv2-licensed and redistributable, as long as the terms of the GPL are satisfied. The FLAC binaries are an aggregate of separate programs, so these GPL restrictions do not apply to the library or your programs that use the library, only to FLAC itself. See LICENSE-FLAC.txt for license details.

More Repositories

1. Autocomplete (AutoHotkey, 180 stars): Suggests and completes words as you type! Write faster and more efficiently.
2. Minetest-WorldEdit (Lua, 162 stars): The ultimate in-game world editing tool for Minetest! Tons of functionality to help with building, fixing, and more.
3. MotionTracking (Python, 97 stars): Blender addon for 3D point reconstruction from multiple cameras.
4. University-Notes (HTML, 77 stars): Notes from various courses at the University of Waterloo.
5. Yunit (AutoHotkey, 51 stars): Super simple testing framework for AutoHotkey.
6. AHK-Scripts (AutoHotkey, 47 stars): A whole bunch of AutoHotkey scripts from around 2009-2013, including archives from several now long-defunct websites and forum posts.
7. Arduino-CommandParser (C++, 40 stars): Complete command parser library for Arduino-compatibles.
8. uw-cs350-development-environment (HTML, 31 stars): Offline development environment for CS350 coursework, as a Docker image.
9. Adwear (Python, 23 stars): ADS FOR THE TERMINAL
10. Arduino-HardwareBLESerial (C++, 18 stars): An Arduino library for BLE Serial/UART using ArduinoBLE.
11. robot-agent (Python, 17 stars): Fine-tuned LLaMa2 13B model designed for ReAct-style and Tree-Of-Thoughts style prompting.
12. ProgressPlatformer (AutoHotkey, 17 stars): A simple platformer game in AutoHotkey.
13. NicePhoneme (Python, 17 stars): Markov chains and statistical analysis tools for Facebook Chat.
14. AHK-DB (AutoHotkey, 17 stars): Database library for AutoHotkey.
15. Canvas-AHK (AutoHotkey, 15 stars): High level drawing library for AutoHotkey.
16. The-Mippits (Python, 12 stars): Flexible MIPS interpreter with support for the CS241 MIPS instruction subset.
17. Parallelist (AutoHotkey, 10 stars): A simple parallelism library for AutoHotkey.
18. botty-bot-bot-bot (Python, 10 stars): Personable chatbot for Slack using the Slack Realtime Messaging API.
19. Autonomy (AutoHotkey, 9 stars): A programming language inspired by AutoHotkey.
20. COURSERATOR3000 (JavaScript, 9 stars): Schedule creator for University of Waterloo courses.
21. biplane (Python, 9 stars): Minimal, fast, robust HTTP server library for Python/CircuitPython that uses non-blocking concurrent I/O even when asyncio isn't available!
22. MeseconEdit (AutoHotkey, 8 stars): A 2D editor and simulator for Mesecons.
23. Goosenstein (Lua, 8 stars): Goosenstein is a sidescroller 2D game, made for HackWATERLOO 2014.
24. setup-machine (Shell, 7 stars): Dotfiles for everything else.
25. devenv (Dockerfile, 7 stars): A Docker image and wrapper script set up for full-stack web dev using Python/Javascript/Go, various databases, and various cloud providers (AWS, GCP).
26. polonium (HTML, 5 stars): Facebook Chat. Refined.
27. AutoHotkey.net-Website-Generator (AutoHotkey, 5 stars): Automatically generates a website for AutoHotkey.net.
28. AHK-Benchmarks (AutoHotkey, 5 stars): Benchmarks for the AutoHotkey language.
29. LegitimatelyTerrible (Java, 4 stars): Legitimately terrible.
30. DWM3001CDK-demo-firmware (C, 4 stars): A heavily-modified version of the official firmware that runs on the Qorvo DWM3001CDK, cleaned up and simplified for easier customization.
31. Classifier (AutoHotkey, 4 stars): A document filtering class implementing a Fisher classifier.
32. Proprietary-Hat-Systems-Ltd. (Python, 4 stars): UofTHacks 2015 entry - "NavHat" by Proprietary Hat Systems Ltd.
33. femtosync (Python, 4 stars): The tiniest file sync tool. Roughly equivalent to 1e-15 ordinary file sync tools.
34. Ludum-Dare-24 (AutoHotkey, 4 stars): Ludum Dare 24 game entry!
35. PL101 (JavaScript, 3 stars): PL101 project files.
36. no.js (3 stars): There's no JS.
37. uberi.github.io (JavaScript, 3 stars): Personal blog and homepage of Anthony Zhang.
38. Uncombinator (Python, 3 stars): SE Hack Day project - break combination padlocks using the power of friendship and statistics.
39. anglr (Python, 3 stars): Planar angle mathematics library for Python.
40. Fraction.ahk (AutoHotkey, 3 stars): Complete fractional math library.
41. PebbleSOS (Objective-C, 3 stars): stuff.
42. ProjectExo (C, 2 stars): Design, CAD files, schematics, and building instructions for Project Exo. In progress.
43. hard-boiled-duct-eggs (Python, 2 stars): Made with 💦 for TerribleHack X!
44. Anana (HTML, 2 stars): The least ergonomic MIDI keyboard ever invented.
45. budget-o-matic (Python, 2 stars): Balance the books, programmer style.
46. uberi.mesecons.net (CSS, 2 stars): The source code for The Mesecons Laboratory.
47. MineTest-API (2 stars): MineTest modding API - developer documentation.
48. SeventhSense (Assembly, 2 stars): SeventhSense is a simple armband that uses targeted vibrations to give the wearer the ability to sense the direction and magnitude of magnetic fields.
49. LightPainter (Arduino, 2 stars): HackWestern 2015 stuff.
50. Picrux (JavaScript, 2 stars): Fast, light, elegant, dead simple reminders utility.
51. truly-global-variables (Python, 1 star): Global variables that actually span the globe. Made with 💩 for TerribleHack IV!
52. circleizer (1 star): Circleizes images into many circles.
53. keytar (Python, 1 star): Hack the North 2015!
54. Turret (Python, 1 star): A conspicuous lack of candy permeates the air.
55. DWM3001C-starter-firmware (C, 1 star): A firmware for the Qorvo DWM3001C with comprehensive examples for all of the module's UWB and ranging functionality, and developer tooling that makes working with the firmware much easier than the official tooling.
56. NoonPacificPlaylistDownloader (Python, 1 star): Downloader for Noon Pacific playlists.
57. Raydium-AHK (AutoHotkey, 1 star): A minimal wrapper for the Raydium Game Engine.