
Dragonfly


Introduction

Dragonfly is a speech recognition framework for Python that makes it convenient to create custom commands to use with speech recognition software. It was written to make it very easy for Python macros, scripts, and applications to interface with speech recognition engines. Its design allows speech commands and grammar objects to be treated as first-class Python objects.

Dragonfly can be used for general programming by voice. It is flexible enough to allow programming in any language, not just Python. It can also be used for speech-enabling applications, automating computer activities and dictating prose.

Dragonfly contains its own powerful framework for defining and executing actions. It includes actions for text input and key-stroke simulation. This framework is cross-platform, working on Windows, macOS and Linux (X11 only). See the actions sub-package documentation for more information, including code examples.
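For instance, action objects are ordinary Python values that can be combined with + and executed directly, without any grammar involved. A minimal sketch, assuming Dragonfly is installed and a supported desktop session is active:

from dragonfly import Key, Text, Pause

# Build a compound action: type some text, wait 0.2 seconds, then press Enter.
action = Text("Hello from Dragonfly!") + Pause("20") + Key("enter")

# execute() performs the action immediately in the foreground application.
action.execute()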

This project is a fork of the original t4ngo/dragonfly project.

Dragonfly currently supports the following speech recognition engines:

  • Dragon, a product of Nuance. All versions up to 15 (the latest) should be supported. Home, Professional Individual and previous similar editions of Dragon are supported. Other editions may work too.
  • Windows Speech Recognition (WSR), included with Microsoft Windows Vista, Windows 7+, and freely available for Windows XP.
  • Kaldi, open source (AGPL) and multi-platform.
  • CMU Pocket Sphinx, open source and multi-platform.
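
The back-end is normally chosen when the engine object is first initialized. The snippet below is a hedged sketch of explicit engine selection with get_engine(); it assumes the chosen back-end's dependencies (and, for Kaldi, a model) are installed. The valid engine names and options are listed in the engine documentation:

from dragonfly import get_engine

# Initialize a specific back-end by name, e.g. "natlink" (Dragon), "kaldi"
# or "sphinx".  Calling get_engine() with no argument picks the first
# available engine.
engine = get_engine("kaldi")
engine.connect()
print("Using engine: %s" % engine.name)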

Documentation and FAQ

Dragonfly's documentation is available online at Read the Docs. The changes in each release are listed in the project's changelog, and Dragonfly's FAQ is part of the documentation. There are also a number of Dragonfly-related questions on Stack Overflow, although many of them relate to issues resolved in the latest version of Dragonfly.

CompoundRule usage example

A very simple example of Dragonfly usage is to create a static voice command with a callback that will be called when the command is spoken. This is done as follows:

from dragonfly import Grammar, CompoundRule

# Voice command rule combining spoken form and recognition processing.
class ExampleRule(CompoundRule):
    spec = "do something computer"                  # Spoken form of command.
    def _process_recognition(self, node, extras):   # Callback when command is spoken.
        print("Voice command spoken.")

# Create a grammar which contains and loads the command rule.
grammar = Grammar("example grammar")                # Create a grammar to contain the command rule.
grammar.add_rule(ExampleRule())                     # Add the command rule to the grammar.
grammar.load()                                      # Load the grammar.

To use this example, save it in a command module in your module loader directory or Natlink user directory, load it and then say "do something computer". If the speech recognition engine recognizes the command, then "Voice command spoken." will be printed in the Natlink messages window. If you're not using Dragon, it will be printed to the console window instead.
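
To try the rule without any speech recognition back-end at all, one option is Dragonfly's built-in "text" engine together with engine.mimic(). A hedged sketch:

from dragonfly import get_engine, Grammar, CompoundRule

class ExampleRule(CompoundRule):
    spec = "do something computer"                  # Spoken form of command.
    def _process_recognition(self, node, extras):   # Callback when command is spoken.
        print("Voice command spoken.")

# Use the engine-less "text" back-end and simulate the spoken words.
engine = get_engine("text")
engine.connect()

grammar = Grammar("example grammar")
grammar.add_rule(ExampleRule())
grammar.load()

engine.mimic("do something computer")   # Prints "Voice command spoken."
engine.disconnect()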

MappingRule usage example

A more common use of Dragonfly is the MappingRule class, which allows defining multiple voice commands. The following example is a simple grammar to be used when Notepad is the foreground window:

from dragonfly import (Grammar, AppContext, MappingRule, Dictation,
                       Key, Text)

# Voice command rule combining spoken forms and action execution.
class NotepadRule(MappingRule):
    # Define the commands and the actions they execute.
    mapping = {
        "save [file]":            Key("c-s"),
        "save [file] as":         Key("a-f, a/20"),
        "save [file] as <text>":  Key("a-f, a/20") + Text("%(text)s"),
        "find <text>":            Key("c-f/20") + Text("%(text)s\n"),
    }

    # Define the extras list of Dragonfly elements which are available
    # to be used in mapping specs and actions.
    extras = [
        Dictation("text")
    ]


# Create the grammar and the context under which it'll be active.
context = AppContext(executable="notepad")
grammar = Grammar("Notepad example", context=context)

# Add the command rule to the grammar and load it.
grammar.add_rule(NotepadRule())
grammar.load()

To use this example, save it in a command module in your module loader directory or Natlink user directory, load it, open a Notepad window and then say one of the mapping commands. For example, saying "save" or "save file" will cause the Control and S keys to be pressed.

The examples above don't show any of Dragonfly's exciting features, such as dynamic speech elements. To learn more about these, please take a look at Dragonfly's online docs.
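
As a small taste, the following hedged sketch (the "go to line" and "scroll" commands are illustrative, not part of the examples above) adds an IntegerRef and a Choice element to a MappingRule, so a single spec covers many spoken variations:

from dragonfly import MappingRule, IntegerRef, Choice, Key, Text

class EditingRule(MappingRule):
    mapping = {
        # "<n>" matches a spoken number between 1 and 999.
        "go to line <n>":      Key("c-g") + Text("%(n)d\n"),
        # "<direction>" matches one of the spoken choices defined below.
        "scroll <direction>":  Key("%(direction)s"),
    }
    extras = [
        IntegerRef("n", 1, 1000),
        Choice("direction", {"up": "pgup", "down": "pgdown"}),
    ]

Such a rule is added to a grammar and loaded in exactly the same way as NotepadRule above.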

Installation

Dragonfly is a Python package. It can be installed as dragonfly2 using pip:

pip install dragonfly2
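
Engine-specific dependencies can usually be pulled in through pip extras; for example, something like the following installs the Kaldi engine's dependencies (the exact extra names are listed on each engine's documentation page):

pip install dragonfly2[kaldi]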

If you are installing this on Linux, you will also need to install the wmctrl, xdotool and xsel programs.
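
On Debian- or Ubuntu-based systems, for example, these can typically be installed with the system package manager (package names may differ on other distributions):

sudo apt-get install wmctrl xdotool xsel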

Please note that, on Linux, Dragonfly is only fully functional in an X11 session. Input action classes, application contexts and the Window class will not be functional under Wayland. It is recommended that Wayland users switch to X11, Windows or macOS.

Dragonfly can also be installed by cloning this repository or downloading it from the releases page and running the following (or similar) command in the project's root directory:

python setup.py install

If pip fails to install dragonfly2 or any of its required or extra dependencies, then you may need to upgrade pip with the following command:

pip install --upgrade pip

Speech recognition engine back-ends

Installation instructions, requirements and API references for each Dragonfly speech recognition engine are documented separately in the engine-specific pages of Dragonfly's documentation.

Existing command modules

The related resources page of Dragonfly's documentation has a section on command modules which lists various sources.