NVIDIA Deepstream + Azure IoT Edge on a NVIDIA Jetson Nano

This is a sample showing how to do real-time video analytics with NVIDIA Deepstream on a NVIDIA Jetson Nano device connected to Azure via Azure IoT Edge. Deepstream is a highly-optimized video processing pipeline capable of running deep neural networks. It is a must-have tool whenever you have complex video analytics requirements, whether it's real-time or with cascading AI models. IoT Edge lets you run this pipeline next to your cameras, where the video data is being generated, thus lowering your bandwidth costs and enabling scenarios with poor internet connectivity or privacy concerns. With this solution, you can transform cameras into sensors to know when there is an available parking spot, a missing product on a retail store shelf, an anomaly on a solar panel, a worker approaching a hazardous zone, etc.

To complete this sample, you need a NVIDIA Jetson Nano device. This device is powerful enough to process 8 video streams at a resolution of 1080p, 30 frames-per-second with a resnet10 model, and is compatible with IoT Edge. If you need to process more video streams, the same code works with more powerful NVIDIA Jetson devices like the TX2 or the Xavier, and with server-class appliances that include an NVIDIA T4 or other NVIDIA Tesla GPUs.

Check out this video to see this demo in action and understand how it was built:

Deepstream On IoT Edge on Jetson Nano

Prerequisites

Jetson Nano

  • Flash your Jetson Nano SD Card: download and flash this JetPack version 4.5 image. You can use the BalenaEtcher tool to flash your SD card. This image is based on Ubuntu 18.04 and already includes the NVIDIA drivers, CUDA and nvidia-docker.

[!WARNING] This branch only works with DeepStream 5.1, which requires JetPack 4.5 (= Release 32, Revision 5). For older versions, please look at other branches of this repo. To double-check your JetPack version, you can use the following command:

head -n 1 /etc/nv_tegra_release
  • Connect your Jetson Nano to your developer's machine with the USB Headless Mode: plug a micro-USB cable from your Jetson Nano to your developer's machine and use the USB Headless Mode provided by NVIDIA. With this mode, you do not need to hook up a monitor directly to your Jetson Nano. Instead, boot your Jetson Nano and allow 1 minute for it to boot. From your computer, follow the instructions on NVIDIA's website to connect to your Jetson Nano over the serial port.

Serial Console

  • Connect your Jetson Nano to the internet: either use an ethernet connection, in which case you can skip this section, or connect your device to WiFi using the command line:

    To connect your Jetson to a WiFi network from a terminal, follow these steps

    1. Re-scan available WiFi networks

      sudo nmcli device wifi rescan
    2. List available WiFi networks, and find the ssid_name of your network.

      sudo nmcli device wifi list
    3. Connect to a selected WiFi network

      sudo nmcli device wifi connect <ssid_name> password <password>
  • Connect to your Jetson Nano with an SSH client: an SSH terminal is often more convenient than a serial terminal, so to make it easier to go through the steps of this sample, it is recommended to open an SSH connection with your favorite SSH client.

    1. Find your IP address using the USB Device Mode terminal

      ifconfig
    2. Make sure that your laptop is on the same network as your Jetson Nano device and open an SSH connection on your Jetson Device:

      ssh your-username@your-ip-address
  • Install IoT Edge: See the Azure IoT Edge installation instructions for Ubuntu Server 18.04. Skip the Install Container Runtime section since we will be using nvidia-docker, which is already installed. Connect your device to your IoT Hub using the manual provisioning option. See this quickstart if you don't yet have an Azure IoT Hub.

  • Install VS Code and its IoT Edge extension on your developer's machine: on your developer's machine, get VS Code and its IoT Edge extension. Configure this extension with your IoT Hub.

  • Install VLC to view RTSP video streams: On your developer's machine, install VLC.
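The JetPack check suggested in the warning above can be wrapped in a small helper that extracts the L4T release and revision from `/etc/nv_tegra_release`. This is a sketch: `l4t_version` is a hypothetical name, and the assumed header-line layout is the usual `# R32 (release), REVISION: 5.1, ...` format.

```shell
# Hypothetical helper: turn the nv_tegra_release header line into a short
# L4T version string. The first line of the file looks like:
#   # R32 (release), REVISION: 5.1, GCID: ..., BOARD: t210ref, ...
l4t_version() {
  sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/R\1.\2/p'
}

# On the device:
#   head -n 1 /etc/nv_tegra_release | l4t_version   # e.g. R32.5.1
```

L4T R32.5.x corresponds to JetPack 4.5.x, which is what this branch targets.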

The next sections walk you step-by-step through deploying Deepstream on an IoT Edge device and updating its configuration. They explain concepts along the way. If all you want is to see the 8 video streams being processed in parallel, you can jump right to the final demo by directly deploying the deployment manifest in this repo.

Deploy Deepstream from the Azure Marketplace

We'll start by creating a new IoT Edge solution in VS Code, add the Deepstream module from the marketplace and deploy that to our Jetson Nano.

Note that you could also find Deepstream's module via the Azure Marketplace website here. We'll use VS Code here since Deepstream is an SDK and typically needs to be tweaked or connected to custom modules to deliver an end-to-end solution at the edge.

In VS Code, from your development machine:

  1. Start by creating a new IoT Edge solution:

    1. Open the command palette (Ctrl+Shift+P)
    2. Select Azure IoT Edge: New IoT Edge Solution
    3. Select a parent folder
    4. Give it a name.
    5. Select Empty Solution (if prompted, accept to install iotedgehubdev)
  2. Add the Deepstream module to your solution:

    1. Open the command palette (Ctrl+Shift+P)
    2. Select Azure IoT Edge: Add IoT Edge module
    3. Select the default deployment manifest (deployment.template.json)
    4. Select Module from Azure Marketplace.
    5. It opens a new tab with all IoT Edge module offers from the Azure Marketplace. Select the NVIDIA DeepStream SDK one, select the NVIDIA DeepStream SDK 5.1 for ARM plan and select the latest tag.

Deepstream in Azure Marketplace

  1. Deploy the solution to your device:

    1. Generate IoT Edge Deployment Manifest by right clicking on the deployment.template.json file
    2. Create Deployment for Single Device by right clicking on the generated file in the /config folder
    3. Select your IoT Edge device
  2. Start monitoring the messages sent from the device to the cloud

    1. Right-click on your device (bottom left corner)
    2. Select Start Monitoring Built-In Event Endpoint

After a little while (enough time for IoT Edge to download and start the DeepStream module, which is 1.75GB, and to compile the AI model), you should be able to see messages sent by the Deepstream module to the cloud via the IoT Edge runtime in VS Code. These messages are the results of Deepstream processing a sample video with a sample AI model that detects people and cars, sending a message for each object found.

Telemetry sent to IoT Hub

View the processed videos

We'll now modify the configuration of the Deepstream application and the IoT Edge deployment manifest to be able to see the output video streams. We'll do that by asking Deepstream to output the inferred videos to an RTSP video stream and visualize this RTSP stream with VLC.

  1. Create your updated Deepstream config file on your Nano device:

    a. Open an SSH connection on your Nano device (for instance from VS Code terminal):

    ssh your-nano-username@your-nano-ip-address
    1. Create a new folder to host modified Deepstream config files
    cd /var
    sudo mkdir deepstream
    mkdir ./deepstream/custom_configs
    sudo chmod -R 777 /var/deepstream
    cd ./deepstream/custom_configs
    1. Use your favorite text editor to create a copy of the sample Deepstream configuration file:

      • Create and open a new file:
      nano test5_config_file_src_infer_azure_iotedge_edited.txt
      • Copy and save the content of the original Deepstream configuration file which you can find in this repo under test5_config_file_src_infer_azure_iotedge.txt

      • Create another configuration file specific to how messages are being sent (which is referenced in the above configuration file):

      nano dstest5_msgconv_sample_config.txt
      • Copy and save the content of the original messaging Deepstream configuration file which you can find in this repo under dstest5_msgconv_sample_config.txt
    2. Edit the configuration file:

      • Disable the first sink (FakeSink) and add a new RTSP sink with the following properties:
      [sink0]
      enable=0
      [sink3]
      enable=1
      #Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
      type=4
      #1=h264 2=h265
      codec=1
      sync=0
      bitrate=4000000
      # set below properties in case of RTSPStreaming
      rtsp-port=8554
      udp-port=5400
      • Reduce the number of inferences to be every 3 frames (see interval property) otherwise the Nano will drop some frames. In the next section, we'll use a Nano specific config to process 8 video streams in real-time:
      [primary-gie]
      enable=1
      gpu-id=0
      batch-size=4
      ## 0=FP32, 1=INT8, 2=FP16 mode
      bbox-border-color0=1;0;0;1
      bbox-border-color1=0;1;1;1
      bbox-border-color2=0;1;1;1
      bbox-border-color3=0;1;0;1
      nvbuf-memory-type=0
      interval=2
      
      • To make it easier to connect to the output RTSP stream, let's set DeepStream to continuously loop over the test input video files:
      [tests]
      file-loop=1
      
      • Save and Quit (CTRL+O, CTRL+X)
  2. Mount your updated config file into the Deepstream module by adding a bind mount to its createOptions in the deployment.template.json file on your development machine:

    • Add the following to your Deepstream createOptions:
    "HostConfig":{
        "Binds": ["/var/deepstream/custom_configs/:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_configs/"]
        }
    • Edit your Deepstream application working directory and entrypoint to use this updated config file via Deepstream createOptions:
    "WorkingDir": "/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_configs/"
    "Entrypoint":["/usr/bin/deepstream-test5-app","-c","test5_config_file_src_infer_azure_iotedge_edited.txt"]
  3. Open the RTSP port of DeepStream module so that you can visualize this feed from another device:

    • Add the following to your Deepstream createOptions, at the root:
    "ExposedPorts":{
        "8554/tcp": {}
    }
    • Add the following to your Deepstream createOptions, in the HostConfig node:
    "PortBindings": {
        "8554/tcp": [
            {
            "HostPort": "8554"
            }
        ]
    }
  4. Deploy your updated IoT Edge solution:

    1. Generate IoT Edge Deployment Manifest by right clicking on the deployment.template.json file
    2. Create Deployment for Single Device by right clicking on the generated file in the /config folder
    3. Select your IoT Edge device
    4. Start monitoring the messages sent from the device to the cloud by right clicking on the device (bottom left corner) and select Start Monitoring Built-In Event Endpoint
  5. Finally, open the default output RTSP stream generated by DeepStream with VLC:

    1. Open VLC
    2. Go to Media > Open Network Stream
    3. Paste the default RTSP Video URL generated by deepstream, which follows the format rtsp://your-nano-ip-address:8554/ds-test
    4. Click Play
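Taken together, the createOptions edits from the steps above combine into something like the following sketch (assembled from this section; verify against the deployment manifest in this repo):

```json
{
  "WorkingDir": "/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_configs/",
  "Entrypoint": ["/usr/bin/deepstream-test5-app", "-c", "test5_config_file_src_infer_azure_iotedge_edited.txt"],
  "ExposedPorts": {
    "8554/tcp": {}
  },
  "HostConfig": {
    "Binds": ["/var/deepstream/custom_configs/:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_configs/"],
    "PortBindings": {
      "8554/tcp": [{ "HostPort": "8554" }]
    }
  }
}
```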

You should now see messages received by IoT Hub in VS Code AND see the processed video via VLC.

Default Output of Deepstream

Process and view 8 video streams (1080p 30fps)

We'll now update Deepstream's configuration to process 8 video streams concurrently (1080p 30fps).

We'll start by updating the batch-size to 8 instead of 4 (primary-gie / batch-size property). Then, because the Jetson Nano isn't capable of doing inferences on 240 frames per second with a ResNet10 model, we will instead run inferences every 5 frames (primary-gie / interval property) and use Deepstream's built-in tracking algorithm for the in-between frames, which is less computationally intensive (tracker group). We'll also use a slightly lower inference resolution (defined via the primary-gie / config-file property). These changes are captured in the Nano-specific Deepstream configuration file below.
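As a sanity check on the arithmetic above, here is the inference budget spelled out (assuming "every 5 frames" means one full inference per 5 frames):

```shell
# 8 streams x 30 fps = 240 frames/s entering the pipeline; inferring on
# every 5th frame leaves 240 / 5 = 48 full ResNet10 inferences/s for the
# GPU, with the tracker covering the frames in between.
streams=8; fps=30; infer_every=5
total_fps=$((streams * fps))
infer_per_sec=$((total_fps / infer_every))
echo "$total_fps frames/s -> $infer_per_sec inferences/s"
```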

  1. Update your previously edited Deepstream config file:

    • Open your previous config file:
    nano test5_config_file_src_infer_azure_iotedge_edited.txt
    • Copy the content of Deepstream's configuration file named test5_config_file_src_infer_azure_iotedge_nano_8sources.txt from this repo

    • Save and Quit (CTRL+O, CTRL+X)

  2. To simulate 8 video cameras, download 8 video files and add them to Deepstream:

    • Open an ssh connection on your Nano device (password=dlinano):
    ssh your-nano-username@your-nano-ip-address
    • Host these video files on your local disk
    cd /var/deepstream
    mkdir custom_streams
    sudo chmod -R 777 /var/deepstream
    cd ./custom_streams
    • Download the video files
    wget -O cars-streams.tar.gz --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588371&authkey=AAavgrxG95v9gu0"
    • Un-compress the video files
    tar -xzvf cars-streams.tar.gz
    • Mount these video streams by adding the following binding via the HostConfig node of Deepstream's createOptions:
    "Binds": [
            "/var/deepstream/custom_configs/:/root/deepstream_sdk_v4.0.2_jetson/sources/apps/sample_apps/deepstream-test5/custom_configs/",
            "/var/deepstream/custom_streams/:/root/deepstream_sdk_v4.0.2_jetson/sources/apps/sample_apps/deepstream-test5/custom_streams/"
            ]
  3. Verify that you are still using your updated configuration file and still expose Deepstream's RTSP port (8554). You can double-check your settings by comparing your deployment file to the one in this repo.

  4. To speed up IoT Edge message throughput, configure the edgeHub to use an in-memory store. In your deployment manifest, set the usePersistentStorage environment variable to false in edgeHub configuration (next to its settings node) and disable unused protocol heads (DeepStream uses MQTT to communicate with the EdgeHub):

    "edgeHub": {
                    "env": {
                      "usePersistentStorage": {
                        "value": "false"
                      },
                      "amqpSettings__enabled": {
                        "value": false
                      },
                      "httpSettings__enabled": {
                        "value": false
                      }
                    }
    }
  5. Deploy your updated IoT Edge solution:

    1. Generate IoT Edge Deployment Manifest by right clicking on the deployment.template.json file
    2. Create Deployment for Single Device by right clicking on the generated file in the /config folder
    3. Select your IoT Edge device
  6. Finally, wait a few moments for DeepStream to restart and open the default output RTSP stream generated by DeepStream with VLC:

    1. Open VLC
    2. Go to Media > Open Network Stream
    3. Paste the default RTSP Video URL generated by deepstream, which follows the format rtsp://your-nano-ip-address:8554/ds-test
    4. Click Play

You should now see the 8 video streams being processed and displayed via VLC.

8 video streams processed real-time

Use your own AI model with Custom Vision

Finally, let's use a custom AI model instead of DeepStream's default one. We'll take the use case of a soda can manufacturer who wants to improve the efficiency of its plant by detecting soda cans that fell down on production lines. We'll use simulated cameras to monitor each of the lines, collect images, train a custom AI model with Custom Vision, a no-code computer vision AI model builder, to detect cans that are up or down, and then deploy this custom AI model to DeepStream.

  1. Let's start by creating a new Custom Vision project in your Azure subscription:

    • Go to http://customvision.ai
    • Sign-in
    • Create a new Project
    • Give it a name like Soda Cans Down
    • Pick your resource; if none exists, select create new and choose the F0 SKU (free) or S0
    • Select Project Type = Object Detection
    • Select Domains = General (Compact)

We've already collected training images for you. Download this compressed folder, unzip it and upload the training images to Custom Vision.

  1. We then need to label all of them:

    • Click on an image
    • Label the cans that are up as Up and the ones that are down as Down
    • Hit the right arrow to move on to the next image and label the remaining 70+ images...or read below to use a pre-built model with this set of images

Labelling in Custom Vision

  1. Once you're done labeling, let's train and export your model:

    • Train your model by clicking on Train
    • Export it by going to the Performance tab, clicking on Export and choosing ONNX
    • Download your custom AI model and unzip it
  2. Finally, we'll deploy this custom vision model to the Jetson Nano and configure DeepStream to use this model.

    • Open an ssh connection on your Nano device (password=dlinano):

      ssh your-nano-username@your-nano-ip-address
    • Create a folder to store your custom model:

      cd /var/deepstream
      sudo mkdir custom_models
      sudo chmod -R 777 /var/deepstream
      cd ./custom_models
    • Copy this custom model to your Jetson Nano, either by copying your own model with scp or by using this pre-built one:

      wget -O cans-onnx-model.tar.gz --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588388&authkey=AC4OIGTkjg_t5Cc"
      tar -xzvf cans-onnx-model.tar.gz
    • For DeepStream to understand how to parse the bounding boxes provided by a model from Custom Vision, we need to download an extra library:

      wget -O libnvdsinfer_custom_impl_Yolo_Custom_Vision.so --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21595626&authkey=AC9Lfp4wuXSTFz4"
    • Download raw video streams that we'll use to simulate cameras

      cd ../custom_streams
      wget -O cans-streams.tar.gz --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588372&authkey=AJfRMnW2qvR3OC4"
      tar -xzvf cans-streams.tar.gz
    • Edit DeepStream configuration file to point to the updated video stream inputs and your custom vision model:

      • Open DeepStream configuration file:
      cd ../custom_configs
      nano test5_config_file_src_infer_azure_iotedge_edited.txt
      • Copy the content of Deepstream's configuration file named test5_config_file_src_infer_azure_iotedge_nano_custom_vision.txt from this repo

      • Save and Quit (CTRL+O, CTRL+X)

      • Create another configuration file specific to the inference engine (which is referenced in the above configuration file):

      nano config_infer_custom_vision.txt
      • Copy the content of inference's configuration file named config_infer_custom_vision.txt from this repo
      • Double check that the num-detected-classes property maps to the number of classes or objects that you've trained your custom vision model for.
      • Save and Quit (CTRL+O, CTRL+X)
      • Create a last configuration file to name your cameras (which is referenced via the camera-id property in the main DeepStream configuration file):
      nano msgconv_config_soda_cans.txt
      • Copy the content of the message converter configuration file named msgconv_config_soda_cans.txt from this repo
      • Save and Quit (CTRL+O, CTRL+X)
  • Mount these video streams, models, configuration files by adding the following bindings via the HostConfig node of Deepstream's createOptions:
"Binds": [
        "/var/deepstream/custom_configs/:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_configs/",
        "/var/deepstream/custom_streams/:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_streams/",
        "/var/deepstream/custom_models/:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test5/custom_models/"
        ]
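One of the steps above asks you to double-check that num-detected-classes matches the number of classes your Custom Vision model was trained for. A small hypothetical helper can automate that comparison (it assumes the Custom Vision export's labels.txt lists one class per line; `check_classes` is not part of this sample):

```shell
# Hypothetical sanity check: compare the class count in a Custom Vision
# labels file with the num-detected-classes value in an nvinfer config.
check_classes() {  # usage: check_classes <labels.txt> <config_infer_file>
  local labels configured
  labels=$(grep -c . "$1")   # one class label per non-empty line
  configured=$(sed -n 's/^num-detected-classes=\([0-9]*\).*/\1/p' "$2")
  if [ "$labels" = "$configured" ]; then
    echo "OK: $labels classes"
  else
    echo "Mismatch: $labels labels vs $configured configured"
  fi
}
```

For this sample you would run it from /var/deepstream/custom_configs against config_infer_custom_vision.txt and the labels file from your unzipped model.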
  1. Deploy your updated IoT Edge solution:

    1. Generate IoT Edge Deployment Manifest by right clicking on the deployment.template.json file
    2. Create Deployment for Single Device by right clicking on the generated file in the /config folder
    3. Select your IoT Edge device
  2. Finally, wait a few moments for DeepStream to restart and open the default output RTSP stream generated by DeepStream with VLC:

    1. Open VLC
    2. Go to Media > Open Network Stream
    3. Paste the default RTSP Video URL generated by deepstream, which follows the format rtsp://your-nano-ip-address:8554/ds-test
    4. Click Play

We are now visualizing the processing of 3 real-time (i.e. 30fps 1080p) video streams with a custom vision AI model that we built in minutes to detect custom visual anomalies!

Custom Vision

Going further

Learn more about DeepStream

A great learning resource to learn more about DeepStream is this free online course by NVIDIA.

DeepStream's SDK is based on GStreamer. It is very modular, with its concept of plugins. Each plugin has sinks and sources. NVIDIA provides several plugins as part of Deepstream that are optimized to leverage NVIDIA's GPUs. How these plugins are connected with each other is defined in the application's configuration file.

You can learn more about its architecture in NVIDIA's official documentation (sneak peek below).

NVIDIA Deepstream Application Architecture

Tips to edit your DeepStream application

Make a quick configuration change

To quickly change a value in your config file, leverage the fact that it is mounted from a local file: all you have to do is (for instance via an ssh terminal):

  1. Open your config file (in /var/deepstream/custom_configs in this sample)

  2. Make your changes and save

  3. Restart Deepstream container

    iotedge restart NVIDIADeepStreamSDK

This assumes that you did not change any file names and thus the same IoT Edge deployment manifest applies.
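The edit-and-restart loop above can be scripted. Here is a hypothetical `set_cfg` helper that flips a key=value line with sed (the config path in the example is the one used in this sample; the helper name is an assumption):

```shell
# Hypothetical helper: set a key=value line in a config file in place.
set_cfg() {  # usage: set_cfg <file> <key> <value>
  sed -i "s/^$2=.*/$2=$3/" "$1"
}

# On the device, for example:
#   set_cfg /var/deepstream/custom_configs/test5_config_file_src_infer_azure_iotedge_edited.txt interval 4
#   iotedge restart NVIDIADeepStreamSDK
```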

Use your own source videos and / or AI models

To use your own source videos and AI models and quickly iterate on them, you can use the same technique used in this sample: mounting local folders with these assets. By doing that, you can quickly iterate on your assets, without any code change or re-compilation.

Use live RTSP streams as inputs

It is a very common configuration to have DeepStream take several live RTSP streams as inputs. All you have to do is modify DeepStream's configuration file and update its source group:

type=4
uri=rtsp://127.0.0.1:554/rtsp_path

and update its streammux group:

live-source=1
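Putting the two edits together, a live RTSP source and the matching streammux setting might look like this (the group layout follows the sample test5 config files; the uri and field values are placeholders to adapt to your cameras):

```ini
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://127.0.0.1:554/rtsp_path
num-sources=1
gpu-id=0

[streammux]
live-source=1
```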

To output an RTSP stream with the final result, note that Deepstream can output RTSP videos on Tesla platforms but not on Jetson platforms for now, due to a current limitation on RTSP encoding on Jetson platforms.

Learn all the configuration options available from Deepstream's documentation

Deepstream supports a wide variety of options, a lot of which are available via configuration changes. To learn more about them, go to Deepstream's documentation:

Troubleshoot your DeepStream module

To debug your DeepStream module, look at the last 200 lines of its logs:

iotedge logs NVIDIADeepStreamSDK --tail 200 -f

Verify your Deepstream module docker options

Sometimes it is helpful to verify the options that Docker took into account when creating your Deepstream container via IoT Edge. It is particularly useful to double-check the folders that have been mounted in your container. The simplest way to do that is to use the docker inspect command:

sudo docker inspect NVIDIADeepStreamSDK

F.A.Q.

Is Moby officially supported with DeepStream and IoT Edge?

Moby supports the new way of mounting NVIDIA GPUs into a Docker container, but IoT Edge does not yet. This support is planned for release 1.0.10 of IoT Edge in early 2020. For now, you still need to use the previous nvidia-docker runtime with Docker CE, which is installed by default on the Jetson Nano. That's why the Deepstream SDK on IoT Edge is currently in preview.

Which AI models does Deepstream support?

Deepstream relies on NVIDIA TensorRT to do the inferencing, so any AI model supported by TensorRT is supported by Deepstream. In practice, most AI models are supported by TensorRT. See this list of all layers supported by TensorRT.

Of course, it accepts AI models in the TensorRT format, but it can also convert TensorFlow and ONNX models (see this documentation for more details on the ONNX -> TensorRT parser). Conversion is done automatically when launching the application.

You can thus build your AI model with Azure Machine Learning and work with the ONNX format, or use Custom Vision with its ONNX export, as shown earlier in this sample.

You can also use pre-built models made freely available by NVIDIA here and customize them using NVIDIA's Transfer Learning Toolkit.

How to format IoT Edge messages?

The Gst-nvmsgbroker plugin is the one sending output messages. Its full documentation is available here.

By default, you can use the topic property in Deepstream to set up the output of the Deepstream module and define your routes in IoT Edge appropriately.
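For example, a route forwarding everything the module emits to IoT Hub might look like this in the deployment manifest (the route name is arbitrary; the module name matches the one used in this sample):

```json
"routes": {
  "DeepstreamToIoTHub": "FROM /messages/modules/NVIDIADeepStreamSDK/outputs/* INTO $upstream"
}
```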

How to manage AI model versions at scale?

Iterating on a model & config file locally and bind-mounting them to the container is only recommended during active development; it does not scale. To manage your application (AI model & config file artifacts in particular), you have two options:

  1. Package everything into one container. Have the artifacts you expect to change regularly like your AI model and config files in the latest layers of your docker container so that most of your docker image remains unchanged when updating those. Each model change will require a new module update.
  2. Use a separate 'artifacts' module to deliver these artifacts and bind mount them to the Deepstream module. That way you can use either twins or your own methods to configure your 'artifacts' module at scale.

Why is Deepstream running as one IoT Edge module with its own plugins vs plugins in different modules?

Deepstream does a lot of optimizations to be able to handle many video streams such as:

  • zero in-memory copy, which is much easier to achieve from the same container
  • pushing the entire pipeline on a GPU card, which requires the entire pipeline to be part of the same container to avoid hardware access conflicts

These types of optimizations only work when the entire pipeline is running in the same container and thus as one IoT Edge module. The output of Deepstream module can however be sent to other modules running on the same device, typically to run some business logic code or filtering logic.

When should you consider building your own Deepstream application vs reconfiguring an existing one?

For some use cases, the default Deepstream app is not enough. Whenever changes to the plugin pipeline are required, configuration changes are not enough and the Deepstream app needs to be re-compiled.

A common example of a different pipeline is cascading AI models (e.g. AI 1 detects a package, AI 2 detects a barcode, etc.).

To build your own Deepstream application or even build your own Deepstream plugin, you can follow this link: Deepstream documentation.

What performance characteristics can you expect from a Deepstream application across NVIDIA GPUs?

NVIDIA published some performance benchmarks on their documentation website.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

30

jp-azureopenai-samples

Python
270
star
31

active-directory-b2c-custom-policy-starterpack

Azure AD B2C now allows uploading of a Custom Policy which allows full control and customization of the Identity Experience Framework
268
star
32

azureai-samples

Official community-driven Azure AI Examples
Jupyter Notebook
260
star
33

azure-batch-samples

Azure Batch and HPC Code Samples
C#
256
star
34

active-directory-b2c-dotnet-webapp-and-webapi

A combined sample for a .NET web application that calls a .NET web API, both secured using Azure AD B2C
JavaScript
244
star
35

openai-dotnet-samples

Azure OpenAI .NET Samples
Jupyter Notebook
236
star
36

streaming-at-scale

How to implement a streaming at scale solution in Azure
C#
234
star
37

azure-files-samples

This repository contains supporting code (PowerShell modules/scripts, ARM templates, etc.) for deploying, configuring, and using Azure Files.
PowerShell
231
star
38

azure-search-openai-javascript

A TypeScript sample app for the Retrieval Augmented Generation pattern running on Azure, using Azure AI Search for retrieval and Azure OpenAI and LangChain large language models (LLMs) to power ChatGPT-style and Q&A experiences.
TypeScript
231
star
39

service-fabric-dotnet-getting-started

Get started with Service Fabric with these simple introductory sample projects.
CSS
230
star
40

ansible-playbooks

Ansible Playbook Samples for Azure
226
star
41

iot-edge-opc-plc

Sample OPC UA server with nodes that generate random and increasing data, anomalies and much more ...
C#
222
star
42

cognitive-services-REST-api-samples

This is a repo for cognitive services REST API samples in 4 languages: C#, Java, Node.js, and Python.
HTML
217
star
43

active-directory-dotnetcore-daemon-v2

A .NET Core daemon console application calling Microsoft Graph or your own WebAPI with its own identity
PowerShell
215
star
44

active-directory-b2c-advanced-policies

Sample for use with Azure AD B2C with Custom Policies.
C#
215
star
45

powerbi-powershell

Samples for calling the Power BI REST API via PowerShell
PowerShell
207
star
46

ms-identity-python-webapp

A Python web application calling Microsoft graph that is secured using the Microsoft identity platform
PowerShell
207
star
47

ms-identity-javascript-react-tutorial

A chapterwise tutorial that will take you through the fundamentals of modern authentication with Microsoft identity platform in React using MSAL React
JavaScript
204
star
48

SpeechToText-WebSockets-Javascript

SDK & Sample to do speech recognition using websockets in Javascript
TypeScript
200
star
49

azure-iot-samples-csharp

Provides a set of easy-to-understand samples for using Azure IoT Hub and Azure IoT Hub Device Provisioning Service and Azure IoT Plug and Play using C# SDK.
C#
196
star
50

digital-twins-explorer

A code sample for visualizing Azure Digital Twins graphs as a web application to create, edit, view, and diagnose digital twins, models, and relationships.
JavaScript
184
star
51

AI-Gateway

APIM ❀️ OpenAI - this repo contains a set of experiments on using GenAI capabilities of Azure API Management with Azure OpenAI and other services
Jupyter Notebook
182
star
52

Serverless-Eventing-Platform-for-Microservices

This solution is a personal knowledge management system and it allows users to upload text, images, and audio into categories. Each of these types of data is managed by a dedicated microservice built on Azure serverless technologies including Azure Functions and Cognitive Services. The web front-end communicates with the microservices through a SignalR-to-Event Grid bridge, allowing for real-time reactive UI updates based on the microservice updates. Each microservice is built and deployed independently using VSTS’s build and release management system, and use a variety of Azure-native data storage technologies.
C#
176
star
53

Custom-vision-service-iot-edge-raspberry-pi

Sample showing how to deploy a AI model from the Custom Vision service to a Raspberry Pi 3 device using Azure IoT Edge
Python
176
star
54

active-directory-angularjs-singlepageapp

An AngularJS based single page app, implemented with an ASP.NET Web API backend, that signs in users and calls web APIs using Azure AD
JavaScript
171
star
55

IoTDemos

Demos created by the IoT Engineering team that showcase IoT services in an end-to-end solution
CSS
171
star
56

ms-identity-aspnet-webapp-openidconnect

A sample showcasing how to develop a web application that handles sign on via the unified Azure AD and MSA endpoint, so that users can sign in using both their work/school account or Microsoft account. The sample also shows how to use MSAL to obtain a token for invoking the Microsoft Graph, as well as incrementental consent.
170
star
57

cosmos-db-design-patterns

A collection of design pattern samples for building applications and services with Azure Cosmos DB for NoSQL.
C#
167
star
58

ms-identity-javascript-angular-tutorial

A chapterwise tutorial that will take you through the fundamentals of modern authentication with Microsoft identity platform in Angular using MSAL Angular v2
TypeScript
165
star
59

azure-python-labs

Labs demonstrating how to use Python with Azure, Visual Studio Code, GitHub, Windows Subsystem for Linux, and more!
Python
164
star
60

active-directory-b2c-javascript-msal-singlepageapp

A single page application (SPA) calling a Web API. Authentication is done with Azure AD B2C by leveraging MSAL.js
JavaScript
164
star
61

cosmosdb-chatgpt

Sample application that combines Azure Cosmos DB with Azure OpenAI ChatGPT service
HTML
163
star
62

active-directory-b2c-dotnetcore-webapp

An ASP.NET Core web application that can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API.
C#
160
star
63

active-directory-xamarin-native-v2

This is a simple Xamarin Forms app showcasing how to use MSAL.NET to authenticate work or school and Microsoft personal accounts with the Microsoft identity platform, and access the Microsoft Graph with the resulting token.
C#
160
star
64

cognitive-services-python-sdk-samples

Learn how to use the Cognitive Services Python SDK with these samples
Python
159
star
65

active-directory-dotnet-webapp-openidconnect

A .NET MVC web application that uses OpenID Connect to sign-in users from a single Azure Active Directory tenant.
JavaScript
159
star
66

openhack-devops-team

DevOps OpenHack Team environment APIs
C#
153
star
67

semantic-kernel-rag-chat

Tutorial for ChatGPT + Enterprise Data with Semantic Kernel, OpenAI, and Azure Cognitive Search
C#
147
star
68

azure-search-power-skills

A collection of useful functions to be deployed as custom skills for Azure Cognitive Search
C#
146
star
69

azure-spring-boot-samples

Spring Cloud Azure Samples
JavaScript
146
star
70

service-fabric-dotnet-web-reference-app

An end-to-end Service Fabric application that demonstrates patterns and features in a web application scenario.
C#
144
star
71

aks-store-demo

Sample microservices app for AKS demos, tutorials, and experiments
Bicep
142
star
72

active-directory-b2c-javascript-nodejs-webapi

A small Node.js Web API for Azure AD B2C that shows how to protect your web api and accept B2C access tokens using Passport.js.
JavaScript
141
star
73

Serverless-APIs

Guidance for building serverless APIs with Azure Functions and API Management.
C#
139
star
74

blockchain-devkit

Samples of how to integrate, connect and use devops to interact with Azure blockchain
Kotlin
138
star
75

storage-blob-dotnet-getting-started

The getting started sample demonstrates how to perform common tasks using the Azure Blob Service in .NET including uploading a blob, CRUD operations, listing, as well as blob snapshot creation.
C#
135
star
76

active-directory-dotnet-webapp-openidconnect-aspnetcore

An ASP.NET Core web application that signs-in Azure AD users from a single Azure AD tenant.
HTML
132
star
77

power-bi-embedded-integrate-report-into-web-app

A Power BI Embedded sample that shows you how to integrate a Power BI report into your own web app
JavaScript
131
star
78

azure-event-grid-viewer

Live view of events from Azure Event Grid with ASP.NET Core and SignalR
HTML
130
star
79

active-directory-dotnet-webapi-manual-jwt-validation

How to manually process a JWT access token in a web API using the JSON Web Token Handler For the Microsoft .Net Framework 4.5.
C#
129
star
80

azure-opensource-labs

Azure Open Source Labs (https://aka.ms/oss-labs)
Bicep
128
star
81

azure-video-indexer-samples

Contains the Azure Media Services Video Indexer samples
Python
128
star
82

active-directory-lab-hybrid-adfs

Create a full AD/CA/ADFS/WAP lab environment with Azure AD Connect installed
PowerShell
125
star
83

service-fabric-dotnet-quickstart

Service Fabric quickstart .net application sample
C#
125
star
84

jmeter-aci-terraform

Scalable cloud load/stress testing pipeline solution with Apache JMeter and Terraform to dynamically provision and destroy the required infrastructure on Azure.
HCL
120
star
85

active-directory-dotnet-desktop-msgraph-v2

Sample showing how a Windows desktop .NET (WPF) application can get an access token using MSAL.NET and call the Microsoft Graph API or other APIs protected by the Microsoft identity platform (Azure Active Directory v2)
C#
120
star
86

active-directory-dotnet-webapp-webapi-openidconnect-aspnetcore

An ASP.NET Core web application that authenticates Azure AD users and calls a web API using OAuth 2.0 access tokens.
C#
119
star
87

ms-identity-aspnet-daemon-webapp

A web application that sync's data from the Microsoft Graph using the identity of the application, instead of on behalf of a user.
C#
117
star
88

active-directory-dotnet-webapp-multitenant-openidconnect

A sample .NET 4.5 MVC web app that signs-up and signs-in users from any Azure AD tenant using OpenID Connect.
JavaScript
116
star
89

azure-intelligent-edge-patterns

Samples for Intelligent Edge Patterns
JavaScript
114
star
90

cognitive-services-sample-data-files

Cognitive Services sample data files
113
star
91

python-docs-hello-world

A simple python application for docs
Python
113
star
92

azure-ai

A hub with a curated awesome list of all Azure AI samples
112
star
93

Cognitive-Speech-STT-Windows

Windows SDK for the Microsoft Speech-to-Text API, part of Cognitive Services
111
star
94

durablefunctions-apiscraping-dotnet

Build an Azure Durable Functions that will scrape GitHub for opened issues and store them on Azure Storage.
C#
111
star
95

active-directory-b2c-xamarin-native

This is a simple Xamarin Forms app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens.
C#
110
star
96

cognitive-services-dotnet-sdk-samples

Learn how to use the Cognitive Services SDKs with these samples
C#
108
star
97

active-directory-dotnet-daemon

A Windows console application that calls a web API using its app identity (instead of a user's identity) to get access tokens in an unattended job or process.
C#
107
star
98

azure-samples-python-management

This repo contains sample code for management libraries of Azure SDK for Python
Python
105
star
99

private-aks-cluster-terraform-devops

This sample shows how to create a private AKS cluster using Terraform and Azure DevOps.
HCL
105
star
100

ms-identity-java-webapp

A Java web application calling Microsoft graph that is secured using the Microsoft identity platform
Java
105
star