

web-speech-cognitive-services

Web Speech API adapter to use Cognitive Services Speech Services for both speech-to-text and text-to-speech service.


Description

Speech technologies enable many interesting scenarios, including intelligent personal assistants, and provide alternative inputs for assistive technologies.

Although the W3C has standardized speech technologies in the browser, speech-to-text and text-to-speech support is still scarce. However, cloud-based speech technologies are very mature.

This polyfill provides the W3C Speech Recognition and Speech Synthesis APIs in the browser by using Azure Cognitive Services Speech Services. This brings speech technologies to all modern first-party browsers on both PC and mobile platforms.

Demo

Before getting started, please obtain a Cognitive Services subscription key from your Azure subscription.

Try out our demo at https://compulim.github.io/web-speech-cognitive-services. If you don't have a subscription key, you can still try out our demo in a speech-supported browser.

We use react-dictate-button and react-say to quickly set up the playground.

Browser requirements

Speech recognition requires the WebRTC API, and the page must be hosted over HTTPS or on localhost. Although iOS 12 supports WebRTC, native apps using WKWebView do not.

Special requirement for Safari

Speech synthesis requires the Web Audio API. On Safari, a user gesture (click or tap) is required before audio clips can be played through the Web Audio API. To ready the Web Audio API for use without a user gesture, you can synthesize an empty string; this will not trigger any network call, but it will play an empty hardcoded short audio clip. If you already have a "primed" AudioContext object, you can also pass it as an option.
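The priming step described above can be sketched as a small helper (hypothetical name primeSpeechSynthesis), to be called from a click or tap handler:

```javascript
// Hypothetical helper: call from a user gesture (click/tap) handler on Safari to
// "prime" Web Audio. Synthesizing an empty string plays a short hardcoded clip
// without triggering any network call, unlocking audio for later synthesis.
function primeSpeechSynthesis(speechSynthesis, SpeechSynthesisUtterance) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(''));
}

// Example wiring (browser only):
// document.querySelector('#start').addEventListener('click', () =>
//   primeSpeechSynthesis(ponyfill.speechSynthesis, ponyfill.SpeechSynthesisUtterance)
// );
```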

How to use

There are two ways to use this package:

  1. Using <script> to load the bundle
  2. Install from NPM

Using <script> to load the bundle

To use the ponyfill directly in HTML, you can use our published bundle from unpkg.

In the sample below, we use the bundle to perform text-to-speech with a voice named "Aria24kRUS".

<!DOCTYPE html>
<html lang="en-US">
  <head>
    <script src="https://unpkg.com/web-speech-cognitive-services/umd/web-speech-cognitive-services.production.min.js"></script>
  </head>
  <body>
    <script>
      const { speechSynthesis, SpeechSynthesisUtterance } = window.WebSpeechCognitiveServices.create({
        credentials: {
          region: 'westus',
          subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
        }
      });

      speechSynthesis.addEventListener('voiceschanged', () => {
        const voices = speechSynthesis.getVoices();
        const utterance = new SpeechSynthesisUtterance('Hello, World!');

        utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));

        speechSynthesis.speak(utterance);
      });
    </script>
  </body>
</html>

We do not host the bundle. When loading from a third-party CDN, you should always use Subresource Integrity to protect the integrity of the bundle.

The voiceschanged event fires shortly after the ponyfill is created. You will need to wait for the event before you can choose a voice for your utterance.
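One way to handle this wait is a small promise wrapper (hypothetical name waitForVoices) that resolves once the voice list is available:

```javascript
// Hypothetical helper: resolves with the voice list, either immediately if voices
// are already loaded, or after the first 'voiceschanged' event fires.
function waitForVoices(speechSynthesis) {
  return new Promise(resolve => {
    const voices = speechSynthesis.getVoices();

    if (voices.length) {
      return resolve(voices);
    }

    speechSynthesis.addEventListener(
      'voiceschanged',
      () => resolve(speechSynthesis.getVoices()),
      { once: true }
    );
  });
}

// Usage sketch:
// const voices = await waitForVoices(speechSynthesis);
// utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));
```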

Install from NPM

For production build, run npm install web-speech-cognitive-services.

For development build, run npm install web-speech-cognitive-services@master.

Since the Speech Services SDK is not on NPM yet, we bundle the SDK inside this package for now. When the Speech Services SDK is released on NPM, we will define it as a peer dependency.

Polyfilling vs. ponyfilling

In JavaScript, a polyfill is a technique for bringing newer features to an older environment. A ponyfill is very similar, but instead of polluting the environment by default, it lets the developer choose what they want. This article talks about polyfill vs. ponyfill.

In this package, we prefer ponyfill because it does not pollute the hosting environment. You are also free to mix and match multiple speech recognition engines in a single environment.

Options

The following lists all options supported by the adapter. Each entry shows the option name and type, followed by its default value and description.

audioConfig: AudioConfig
Default: fromDefaultMicrophoneInput()
AudioConfig object to use with speech recognition. Please refer to this article for details on selecting different audio devices.

audioContext: AudioContext
Default: undefined
The audio context used to synthesize speech. If this is undefined, the AudioContext object will be created on first synthesis.

credentials: ICredentials | Promise<ICredentials> | () => ICredentials | () => Promise<ICredentials>
(Required) Credentials (including Azure region) from Cognitive Services. ICredentials is one of the following shapes:

{
  authorizationToken: string,
  region: string
}

{
  region: string,
  subscriptionKey: string
}

{
  authorizationToken: string,
  customVoiceHostname?: string,
  speechRecognitionHostname: string,
  speechSynthesisHostname: string
}

{
  customVoiceHostname?: string,
  speechRecognitionHostname: string,
  speechSynthesisHostname: string,
  subscriptionKey: string
}

Please refer to this article to obtain an authorization token. A subscription key is not recommended for production use, as it will be leaked in the browser.

For sovereign clouds such as Azure Government (United States) and Azure China, instead of specifying region, specify speechRecognitionHostname and speechSynthesisHostname. You can find the sovereign cloud connection parameters in this article.

enableTelemetry
Default: undefined
Pass-through option to enable or disable telemetry for the Speech SDK recognizer, as outlined in the Speech SDK documentation. This adapter does not collect any telemetry. By default, the Speech SDK collects telemetry unless this is set to false.

looseEvents: boolean
Default: false
Specifies whether the event order should strictly follow observed browser behavior (false) or a loosened behavior (true). Regardless of this option, both behaviors conform to the W3C specification. You can read more about this option in the event order section.

ponyfill.AudioContext: AudioContext
Default: window.AudioContext || window.webkitAudioContext
Ponyfill for the Web Audio API. Currently, only the Web Audio API can be ponyfilled. We may expand to WebRTC for audio recording in the future.

referenceGrammars: string[]
Default: undefined
Reference grammar IDs to send for speech recognition.

speechRecognitionEndpointId: string
Default: undefined
Endpoint ID for the Custom Speech service.

speechSynthesisDeploymentId: string
Default: undefined
Deployment ID for the Custom Voice service. When using Custom Voice, you will need to specify your voice model name through SpeechSynthesisVoice.voiceURI. Please refer to the "Custom Voice support" section for details.

speechSynthesisOutputFormat: string
Default: "audio-24khz-160kbitrate-mono-mp3"
Audio format for speech synthesis. Please refer to this article for the list of supported formats.

textNormalization: string
Default: "display"
Supported text normalization options:

  • "display"
  • "itn" (inverse text normalization)
  • "lexical"
  • "maskeditn" (masked ITN)

Setting up for sovereign clouds

You can use the adapter to connect to sovereign clouds, including Azure Government (United States) and Microsoft Azure China.

Please refer to this article on limitations when using Cognitive Services Speech Services on sovereign clouds.

Azure Government (United States)

createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    speechRecognitionHostname: 'virginia.stt.speech.azure.us',
    speechSynthesisHostname: 'virginia.tts.speech.azure.us'
  }
});

Microsoft Azure China

createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    speechRecognitionHostname: 'chinaeast2.stt.speech.azure.cn',
    speechSynthesisHostname: 'chinaeast2.tts.speech.azure.cn'
  }
});

Code snippets

For readability, we omit the wrapping async function in all code snippets. To run the code, you will need to wrap it in an async function.

Speech recognition (speech-to-text)

import { createSpeechRecognitionPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/SpeechToText';

const {
  SpeechRecognition
} = await createSpeechRecognitionPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

const recognition = new SpeechRecognition();

recognition.interimResults = true;
recognition.lang = 'en-US';

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();

Note: most browsers require HTTPS or localhost for WebRTC.

Integrating with React

You can use react-dictate-button to integrate speech recognition functionality to your React app.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import DictateButton from 'react-dictate-button';

const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

export default props =>
  <DictateButton
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>

Speech synthesis (text-to-speech)

import { createSpeechSynthesisPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/TextToSpeech';

const {
  speechSynthesis,
  SpeechSynthesisUtterance
} = await createSpeechSynthesisPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

speechSynthesis.addEventListener('voiceschanged', () => {
  const voices = speechSynthesis.getVoices();
  const utterance = new SpeechSynthesisUtterance('Hello, World!');

  utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));

  speechSynthesis.speak(utterance);
});

Note: speechSynthesis is camel-cased because it is an instance.

List of supported regions can be found in this article.

pitch, rate, voice, and volume are supported. Only onstart, onerror, and onend events are supported.

Integrating with React

You can use react-say to integrate speech synthesis functionality to your React app.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import React, { useEffect, useState } from 'react';
import Say from 'react-say';

export default () => {
  const [ponyfill, setPonyfill] = useState();

  useEffect(() => {
    (async () => {
      setPonyfill(await createPonyfill({
        credentials: {
          region: 'westus',
          subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
        }
      }));
    })();
  }, [setPonyfill]);

  return (
    ponyfill &&
      <Say
        speechSynthesis={ ponyfill.speechSynthesis }
        speechSynthesisUtterance={ ponyfill.SpeechSynthesisUtterance }
        text="Hello, World!"
      />
  );
};

Using authorization token

Instead of exposing the subscription key in the browser, we strongly recommend using an authorization token.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    region: 'westus'
  }
});

You can also provide an async function that will fetch the authorization token and Azure region on demand. You should cache the authorization token for subsequent requests. For simplicity, this code snippet does not cache the result.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: async () => {
    const res = await fetch('https://example.com/your-token');

    return {
      authorizationToken: await res.text(),
      region: 'westus'
    };
  }
});
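A simple cache can be layered on top of any credentials function. The sketch below uses a hypothetical helper, createCachedCredentials, with an assumed time-to-live; since Azure authorization tokens expire, pick a TTL shorter than the token lifetime:

```javascript
// Hypothetical caching wrapper: fetchCredentials is any function returning a
// Promise of credentials. The result is reused until ttlInMS elapses, after
// which it is fetched again.
function createCachedCredentials(fetchCredentials, ttlInMS = 300000) {
  let cached;
  let expireAt = 0;

  return async () => {
    if (Date.now() >= expireAt) {
      cached = await fetchCredentials();
      expireAt = Date.now() + ttlInMS;
    }

    return cached;
  };
}

// Usage sketch:
// const ponyfill = await createPonyfill({
//   credentials: createCachedCredentials(async () => {
//     const res = await fetch('https://example.com/your-token');
//
//     return { authorizationToken: await res.text(), region: 'westus' };
//   })
// });
```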

List of supported regions can be found in this article.

Lexical and ITN support

Lexical and ITN support is unique to Cognitive Services Speech Services. In addition to transcript and confidence, our adapter adds the properties transcriptITN, transcriptLexical, and transcriptMaskedITN to surface these results.
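For illustration, a result handler can pick these properties off the first alternative of a result. The helper name extractTranscripts and the array shape of the result are assumptions for this sketch; the property names are the adapter's:

```javascript
// Reads the adapter-specific transcript variants from a speech recognition result.
// transcriptITN / transcriptLexical / transcriptMaskedITN are adapter additions;
// transcript and confidence are standard Web Speech API properties.
function extractTranscripts(result) {
  const [firstAlternative] = result;

  return {
    confidence: firstAlternative.confidence,
    display: firstAlternative.transcript,
    itn: firstAlternative.transcriptITN,
    lexical: firstAlternative.transcriptLexical,
    maskedITN: firstAlternative.transcriptMaskedITN
  };
}

// recognition.onresult = ({ results }) => console.log(extractTranscripts(results[0]));
```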

Biasing towards some words for recognition

In some cases, you may want the speech recognition engine to be biased towards "Bellevue", because it is not trivial for the engine to distinguish between "Bellevue", "Bellview", and "Bellvue" (without the "e"). By giving it a list of words, the speech recognition engine will be more biased towards your choice of words.

Since Cognitive Services does not work with weighted grammars, we built another SpeechGrammarList to better fit the scenario.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

const recognition = new SpeechRecognition();

recognition.grammars = new SpeechGrammarList();
recognition.grammars.phrases = ['Tuen Mun', 'Yuen Long'];

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();

Custom Speech support

Please refer to "What is Custom Speech?" for a tutorial on creating your first Custom Speech model.

To use Custom Speech for speech recognition, you need to pass the endpoint ID while creating the ponyfill.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  speechRecognitionEndpointId: '12345678-1234-5678-abcd-12345678abcd',
});

Custom Voice support

Please refer to "Get started with Custom Voice" for a tutorial on creating your first Custom Voice model.

To use Custom Voice for speech synthesis, you need to pass the deployment ID while creating the ponyfill, and pass the voice model name as voice URI.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  speechSynthesisDeploymentId: '12345678-1234-5678-abcd-12345678abcd',
});

const { speechSynthesis, SpeechSynthesisUtterance } = ponyfill;

const utterance = new SpeechSynthesisUtterance('Hello, World!');

utterance.voice = { voiceURI: 'your-model-name' };

await speechSynthesis.speak(utterance);

Event order

According to the W3C specification, the result event can fire at any time after the audiostart event.

In continuous mode, finalized result events are sent as early as possible. But in non-continuous mode, we observed that browsers send the finalized result event just before audioend, instead of as early as possible.

By default, we follow the event order observed in browsers (a.k.a. strict event order). For speech recognition in non-continuous mode with interim results, the observed event order is:

  1. start
  2. audiostart
  3. soundstart
  4. speechstart
  5. result (these are interim results, with isFinal property set to false)
  6. speechend
  7. soundend
  8. audioend
  9. result (with isFinal property set to true)
  10. end

You can loosen the event order by setting looseEvents to true. For the same scenario, the event order becomes:

  1. start
  2. audiostart
  3. soundstart
  4. speechstart
  5. result (these are interim results, with isFinal property set to false)
  6. result (with isFinal property set to true)
  7. speechend
  8. soundend
  9. audioend
  10. end

For error events (abort, "no-speech", or other errors), we always send them just before the final end event.

In some cases, loosening the event order may improve recognition performance. This does not break conformance to the W3C standard.

Test matrix

For detailed test matrix, please refer to SPEC-RECOGNITION.md or SPEC-SYNTHESIS.md.

Known issues

  • Speech recognition
    • Interim results do not return confidence; final results do have confidence
      • We always return 0.5 for interim results
    • Cognitive Services supports grammar lists, but not in JSGF format; more work needs to be done in this area
      • Although Google Chrome supports grammar lists, it seems the grammar list is not used at all
  • Speech synthesis
    • onboundary, onmark, onpause, and onresume are not supported/fired
    • pause will pause immediately and does not pause on word breaks, due to the lack of boundary events

Roadmap

  • Speech recognition
    • Add tests for lifecycle events
    • Support stop() and abort() functions
    • Add dynamic phrases
    • Add reference grammars
    • Add continuous mode
    • Investigate support of Opus (OGG) encoding
    • Support custom speech
    • Support ITN, masked ITN, and lexical output
  • Speech synthesis
    • Event: add pause/resume support
    • Properties: add paused/pending/speaking support
    • Support custom voice fonts

Contributions

Like us? Star us.

Want to make it better? File us an issue.

Don't like something you see? Submit a pull request.
