Official Python SDK for Deepgram. Power your apps with world-class speech and Language AI models.
- Deepgram Python SDK
- Documentation
- Getting an API Key
- Requirements
- Installation
- Quickstarts
- Examples
- Development and Contributing
- Getting Help
You can learn more about the Deepgram API at developers.deepgram.com.
🔑 To access the Deepgram API you will need a free Deepgram API Key.
Python (version ^3.10)
To install the latest version available (note that the latest version will change over time):
pip install deepgram-sdk
If you are going to write an application that consumes this SDK, it's highly recommended (and a programming staple) to pin to at least a major version of the SDK (i.e. ==2.*) or, with due diligence, to a minor and/or specific version (i.e. ==2.1.* or ==2.12.0, respectively). If you are unfamiliar with semantic versioning (semver), it's a must-read.
In a requirements.txt file, pinning to a major (or minor) version, for example if you want to stay on the SDK v2 release line, can be done like this:
deepgram-sdk==2.*
Or using pip:
pip install deepgram-sdk==2.*
Pinning to a specific version can be done like this in a requirements.txt file:
deepgram-sdk==2.12.0
Or using pip:
pip install deepgram-sdk==2.12.0
We guarantee that major interfaces will not break within a given major semver release (i.e. the 2.* series). However, all bets are off when moving from a 2.* to a 3.* major release. This follows standard semver best practices.
This SDK aims to reduce complexity and abstract/hide some internal Deepgram details that clients shouldn't need to know about. However, you can still tweak options and settings if you need to.
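For example, you can pass a DeepgramClientOptions object when constructing the client to adjust logging or connection behavior. The snippet below is a minimal sketch; the keepalive option and the log level shown are illustrative choices, not required settings:

import logging
from deepgram import DeepgramClient, DeepgramClientOptions

# Illustrative configuration: raise the SDK's log verbosity and ask the
# live client to send websocket keepalives. Both values are examples only.
config = DeepgramClientOptions(
    verbose=logging.DEBUG,
    options={"keepalive": "true"},
)

# An empty API key falls back to the DEEPGRAM_API_KEY environment variable,
# as in the quickstarts below.
deepgram: DeepgramClient = DeepgramClient("", config)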
You can find a walkthrough on our documentation site. Transcribing Pre-Recorded Audio can be done using the following sample code:
from deepgram import DeepgramClient, PrerecordedOptions, ClientOptionsFromEnv

AUDIO_URL = {
    "url": "https://static.deepgram.com/examples/Bueller-Life-moves-pretty-fast.wav"
}

# STEP 1 Create a Deepgram client using the API key from environment variables
deepgram: DeepgramClient = DeepgramClient("", ClientOptionsFromEnv())

# STEP 2 Call the transcribe_url method on the prerecorded class
options: PrerecordedOptions = PrerecordedOptions(
    model="nova-2",
    smart_format=True,
)
response = deepgram.listen.prerecorded.v("1").transcribe_url(AUDIO_URL, options)

# STEP 3 Print the response
print(f"response: {response}\n\n")
You can find a walkthrough on our documentation site. Transcribing Live Audio can be done using the following sample code:
from deepgram import (
    DeepgramClient,
    LiveTranscriptionEvents,
    LiveOptions,
    Microphone,
)

# STEP 1 Create a Deepgram client using the API key from the
# DEEPGRAM_API_KEY environment variable
deepgram: DeepgramClient = DeepgramClient()

# STEP 2 Create a websocket connection to Deepgram
dg_connection = deepgram.listen.live.v("1")

# STEP 3 Define event handlers for the connection
def on_open(self, open, **kwargs):
    print(f"\n\n{open}\n\n")

def on_message(self, result, **kwargs):
    sentence = result.channel.alternatives[0].transcript
    if len(sentence) == 0:
        return
    print(f"speaker: {sentence}")

def on_metadata(self, metadata, **kwargs):
    print(f"\n\n{metadata}\n\n")

def on_speech_started(self, speech_started, **kwargs):
    print(f"\n\n{speech_started}\n\n")

def on_utterance_end(self, utterance_end, **kwargs):
    print(f"\n\n{utterance_end}\n\n")

def on_error(self, error, **kwargs):
    print(f"\n\n{error}\n\n")

def on_close(self, close, **kwargs):
    print(f"\n\n{close}\n\n")

# STEP 4 Register the event handlers
dg_connection.on(LiveTranscriptionEvents.Open, on_open)
dg_connection.on(LiveTranscriptionEvents.Transcript, on_message)
dg_connection.on(LiveTranscriptionEvents.Metadata, on_metadata)
dg_connection.on(LiveTranscriptionEvents.SpeechStarted, on_speech_started)
dg_connection.on(LiveTranscriptionEvents.UtteranceEnd, on_utterance_end)
dg_connection.on(LiveTranscriptionEvents.Error, on_error)
dg_connection.on(LiveTranscriptionEvents.Close, on_close)

# STEP 5 Configure transcription options and start the connection
options: LiveOptions = LiveOptions(
    model="nova-2",
    punctuate=True,
    language="en-US",
    encoding="linear16",
    channels=1,
    sample_rate=16000,
    # To get UtteranceEnd, the following must be set:
    interim_results=True,
    utterance_end_ms="1000",
    vad_events=True,
)
dg_connection.start(options)

# STEP 6 Create a microphone that streams audio to the connection, and start it
microphone = Microphone(dg_connection.send)
microphone.start()

# STEP 7 Wait until the user presses Enter, then shut everything down
input("Press Enter to stop recording...\n\n")

# Wait for the microphone to close
microphone.finish()

# Indicate that we've finished
dg_connection.finish()

print("Finished")
There are examples for every API call in this SDK. You can find all of these examples in the examples folder at the root of this repo.
These examples provide:
- Analyze Text (see the text-analysis sketch after this list):
  - Intent Recognition - examples/analyze/intent
  - Sentiment Analysis - examples/analyze/sentiment
  - Summarization - examples/analyze/summary
  - Topic Detection - examples/analyze/topic
- PreRecorded Audio:
  - Transcription From an Audio File - examples/prerecorded/file
  - Transcription From a URL - examples/prerecorded/url
  - Intent Recognition - examples/prerecorded/intent
  - Sentiment Analysis - examples/prerecorded/sentiment
  - Summarization - examples/prerecorded/summary
  - Topic Detection - examples/prerecorded/topic
- Live Audio Transcription:
  - From a Microphone - examples/streaming/microphone
  - From an HTTP Endpoint - examples/streaming/http
- Management API - full CRUD operations for:
- Balances - examples/manage/balances
- Invitations - examples/manage/invitations
- Keys - examples/manage/keys
- Members - examples/manage/members
- Projects - examples/manage/projects
- Scopes - examples/manage/scopes
- Usage - examples/manage/usage
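For the text-analysis examples, the request shape mirrors pre-recorded transcription. Below is a minimal sketch of sentiment analysis based on the examples/analyze examples; the filename is illustrative, and the AnalyzeOptions flags shown are one possible configuration:

from deepgram import DeepgramClient, AnalyzeOptions, TextSource

TEXT_FILE = "conversation.txt"  # illustrative path to a local text file

# Create a client (reads DEEPGRAM_API_KEY from the environment)
deepgram: DeepgramClient = DeepgramClient()

# Read the text into memory and wrap it as a text source
with open(TEXT_FILE, "r") as file:
    buffer_data = file.read()
payload: TextSource = {"buffer": buffer_data}

# Enable the desired analysis feature(s); flags such as intents=True,
# summarize=True, or topics=True select the other features listed above.
options: AnalyzeOptions = AnalyzeOptions(
    language="en",
    sentiment=True,
)

response = deepgram.read.analyze.v("1").analyze_text(payload, options)
print(response.to_json(indent=4))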
To run each example, set the DEEPGRAM_API_KEY environment variable, then cd into the example's folder and execute it: python main.py.
Interested in contributing? We ❤️ pull requests!
To make sure our community is safe for all, be sure to review and agree to our Code of Conduct. Then see the Contribution guidelines for more information.
In order to develop new features for the SDK itself, you first need to uninstall any previous installation of the deepgram-sdk, install the dependencies contained in requirements.txt, and then instruct Python (via pip) to use the SDK by installing it locally.
From the root of the repo, that would entail:
pip uninstall deepgram-sdk
pip install -r requirements.txt
pip install -e .
If you are looking to contribute or modify pytest code, then you need to install the following dependencies:
pip install -r requirements-dev.txt
We love to hear from you so if you have questions, comments or find a bug in the project, let us know! You can either: