
Repository Details

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream

Integrate TAO model with DeepStream SDK

Description

This repository provides a DeepStream sample application, based on the NVIDIA DeepStream SDK, that runs eleven TAO models (Faster-RCNN / YoloV3 / YoloV4 / YoloV5 / SSD / DSSD / RetinaNet / PeopleSegNet / UNET / multi_task / peopleSemSegNet). It contains the following:

  • apps: sample applications for the detection and segmentation models
  • configs: DeepStream nvinfer configuration files and label files
  • post_processor: inference postprocessors for the models
  • graphs: DeepStream sample graphs based on the Graph Composer tools
  • models: the models used as samples
  • TRT-OSS: build and download instructions for the OSS nvinfer plugins; the OSS plugins are not needed since DeepStream 6.1.1 GA

The pipeline of the sample:

uridecodebin --> streammux --> nvinfer(detection) --> nvosd --+--> encode --+--> filesink (save the output to a local file)
                                                              |             +--> fakesink (use the -f option)
                                                              +--> display
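
For orientation, here is a minimal, hedged C++ sketch of how such a pipeline can be assembled with the GStreamer C API. It is illustrative, not the sample's actual source: element and property names follow DeepStream conventions, the config path and URI are assumptions, and error and audio-pad handling are reduced to the bare minimum.

#include <gst/gst.h>

// Build (assumed): g++ pipeline_sketch.cpp $(pkg-config --cflags --libs gstreamer-1.0)

static void onPadAdded(GstElement*, GstPad* newPad, gpointer data) {
    // uridecodebin creates its src pad at runtime; link it to a requested
    // sink_%u pad of nvstreammux (video-only handling omitted for brevity).
    GstElement* mux = GST_ELEMENT(data);
    GstPad* sinkPad = gst_element_get_request_pad(mux, "sink_0");
    if (sinkPad) {
        if (!gst_pad_is_linked(sinkPad))
            gst_pad_link(newPad, sinkPad);
        gst_object_unref(sinkPad);
    }
}

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    GstElement* pipeline  = gst_pipeline_new("ds-tao-pipeline");
    GstElement* source    = gst_element_factory_make("uridecodebin", "source");
    GstElement* streammux = gst_element_factory_make("nvstreammux", "mux");
    GstElement* pgie      = gst_element_factory_make("nvinfer", "primary-gie");
    GstElement* osd       = gst_element_factory_make("nvdsosd", "osd");
    GstElement* sink      = gst_element_factory_make("nveglglessink", "display");
    if (!pipeline || !source || !streammux || !pgie || !osd || !sink)
        return -1;

    g_object_set(source, "uri",
                 "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4", NULL);
    g_object_set(streammux, "batch-size", 1, "width", 1280, "height", 720, NULL);
    g_object_set(pgie, "config-file-path",
                 "configs/nvinfer/dssd_tao/pgie_dssd_tao_config.yml", NULL);  // assumed path

    gst_bin_add_many(GST_BIN(pipeline), source, streammux, pgie, osd, sink, NULL);
    gst_element_link_many(streammux, pgie, osd, sink, NULL);  // display branch only
    g_signal_connect(source, "pad-added", G_CALLBACK(onPadAdded), streammux);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

In the real sample the nvosd output fans out through a tee to the encode/filesink, fakesink, and display branches shown above.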

Prerequisites

  • DeepStream SDK 6.3 GA

Make sure the deepstream-test1 sample runs successfully to verify your installation. As described in the documentation, run the command below to install the additional audio/video packages.

    /opt/nvidia/deepstream/deepstream/user_additional_install.sh
    
  • Eigen development packages

      sudo apt install libeigen3-dev
      cd /usr/include
      sudo ln -sf eigen3/Eigen Eigen
    

Download

1. Download Source Code with SSH or HTTPS

sudo apt update
sudo apt install git-lfs
git lfs install --skip-repo
# SSH
git clone git@github.com:NVIDIA-AI-IOT/deepstream_tao_apps.git
# or HTTPS
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git

2. Download Models

Run the script below to download the models, except the multi_task and YoloV5 models.

sudo ./download_models.sh  # (sudo not required in case of docker containers)

For multi_task, refer to https://docs.nvidia.com/tao/tao-toolkit/text/multitask_image_classification.html to train and generate the model.

For YoloV5, refer to yolov5_gpu_optimization to generate the ONNX model.

Triton Inference Server

The sample provides three inferencing methods. For TensorRT-based gst-nvinfer inferencing, please skip this part.

The DeepStream sample application can work as a Triton client with the Triton Inference Server. One of the following two methods can be used to set up the Triton Inference Server before starting a gst-nvinferserver inferencing DeepStream application.

For the TAO sample applications, please enable Triton or Triton gRPC inferencing with the app YAML configurations.

E.g., with apps/tao_detection/ds-tao-detection, the "primary-gie" part in configs/app/det_app_frcnn.yml can be modified as follows:

primary-gie:
  #0:nvinfer, 1:nvinferserver
  plugin-type: 1
  #dssd
  #config-file-path: ../nvinfer/dssd_tao/pgie_dssd_tao_config.yml
  config-file-path: ../triton/dssd_tao/pgie_dssd_tao_config.yml
  #config-file-path: ../triton-grpc/dssd_tao/pgie_dssd_tao_config.yml

And then run the app with the command:

./apps/tao_detection/ds-tao-detection configs/app/det_app_frcnn.yml

Build

Build Sample Application

export CUDA_MODULE_LOADING=LAZY
export CUDA_VER=xy.z                                      # xy.z is the CUDA version, e.g. 10.2
make

Run


1. Usage: ds-tao-detection -c pgie_config_file -i <H264 or JPEG file uri> [-b BATCH] [-d] [-f] [-l]
    -h: print help info
    -c: pgie config file, e.g. pgie_frcnn_tao_config.txt
    -i: URI of the input file, starting with file:///, e.g. file:///.../video.mp4
    -b: batch size; overrides the "batch-size" value in the pgie config file
    -d: enable display; without it, the output is dumped to an MP4 or JPEG file unless -f is given
    -f: use fakesink mode
    -l: use loop mode

2. Usage: ds-tao-detection <yaml file uri>
  e.g.
  ./apps/tao_detection/ds-tao-detection configs/app/det_app_frcnn.yml


note: To use multiple sources, pass multiple -i arguments (e.g., -i uri -i uri ...).
      Only YAML configurations support Triton and Triton gRPC inferencing.

For detailed model information, please refer to the overview below:

note:
The default $DS_SRC_PATH is /opt/nvidia/deepstream/deepstream

  • detector (dssd, peoplenet_transformer, efficientdet, frcnn, retinanet, retail_detector_100, retail_detector_binary, ssd, yolov3, yolov4-tiny, yolov4, yolov5):
    ./apps/tao_detection/ds-tao-detection -c configs/dssd_tao/pgie_dssd_tao_config.txt -i file:///$DS_SRC_PATH/samples/streams/sample_720p.mp4
    or ./apps/tao_detection/ds-tao-detection configs/app/det_app_frcnn.yml
  • classifier (multi-task):
    ./apps/tao_classifier/ds-tao-classifier -c configs/multi_task_tao/pgie_multi_task_tao_config.txt -i file:///$DS_SRC_PATH/samples/streams/sample_720p.mp4
    or ./apps/tao_classifier/ds-tao-classifier configs/app/multi_task_app_config.yml
  • segmentation (peopleSemSegNet, unet, citySemSegFormer):
    ./apps/tao_segmentation/ds-tao-segmentation -c configs/peopleSemSegNet_tao/pgie_peopleSemSegNet_tao_config.txt -i file:///$DS_SRC_PATH/samples/streams/sample_720p.mp4
    or ./apps/tao_segmentation/ds-tao-segmentation configs/app/seg_app_unet.yml
  • instance segmentation (peopleSegNet):
    export SHOW_MASK=1; ./apps/tao_detection/ds-tao-detection -c configs/peopleSegNet_tao/pgie_peopleSegNet_tao_config.txt -i file:///$DS_SRC_PATH/samples/streams/sample_720p.mp4
    or export SHOW_MASK=1; ./apps/tao_detection/ds-tao-detection configs/app/ins_seg_app_peopleSegNet.yml
  • others (FaceDetect, Facial Landmarks Estimation, EmotionNet, Gaze Estimation, GestureNet, HeartRateNet, BodyPoseNet, Re-identification, Retail Object Recognition, PoseClassificationNet):
    refer to the detailed README for how to configure and run each model

Building the TensorRT engine of citySemSegFormer consumes a lot of device memory. Please export CUDA_MODULE_LOADING=LAZY to reduce device memory consumption. Please read CUDA Environment Variables for details.

Information for Customization

If you want to do some customization, such as training your own TAO models or running the models in other DeepStream pipelines, read the sections below.

TAO Models

To download the sample models that we have trained with the NVIDIA TAO Toolkit SDK, run:

wget https://nvidia.box.com/shared/static/taqr2y52go17x1ymaekmg6dh8z6d43wr -O models.zip

Refer to the TAO documentation for how to train the models. After training finishes, run tao-export to generate an .etlt model. This .etlt model can be deployed into DeepStream for fast inference, as this sample shows. The DeepStream sample app also supports the TensorRT engine (plan) file generated by running the tao-converter tool on the .etlt model. The TensorRT engine file is hardware dependent, while the .etlt model is not. You may specify either a TensorRT engine file or an .etlt model in the DeepStream configuration file.

Note: for the Unet/peopleSemSegNet/yolov3/yolov4/yolov5 models, you can also convert the .etlt model to a TensorRT engine file using tao-converter, as follows:

tao-converter -e models/unet/unet_resnet18.etlt_b1_gpu0_fp16.engine -p input_1,1x3x608x960,1x3x608x960,1x3x608x960 -t fp16 -k tlt_encode -m 1 models/unet/unet_resnet18.etlt

Label Files

The label file lists the class names for a model; its content varies across models.
You can find the detailed label information for each MODEL in its README.md and in the label file under configs/$(MODEL)_tao/, e.g. the SSD label information under configs/ssd_tao/.

Note: for some models such as FasterRCNN, DON'T forget to include the "background" label and to change num-detected-classes in the pgie configuration file accordingly.

DeepStream configuration file

The DeepStream configuration file includes some runtime parameters for DeepStream nvinfer plugin or nvinferserver plugin, such as model path, label file path, TensorRT inference precision, input and output node names, input dimensions and so on.
In this sample, each model has its own DeepStream configuration file, e.g. pgie_dssd_tao_config.txt for DSSD model. Please refer to DeepStream Development Guide for detailed explanations of those parameters.

Model Outputs

1~4. Yolov3 / YoloV4 / Yolov4-tiny / Yolov5

The model has the following four outputs:

  • num_detections: A [batch_size] tensor containing the INT32 scalar indicating the number of valid detections per batch item. It can be less than keepTopK. Only the top num_detections[i] entries in nmsed_boxes[i], nmsed_scores[i] and nmsed_classes[i] are valid
  • nmsed_boxes: A [batch_size, keepTopK, 4] float32 tensor containing the coordinates of non-max suppressed boxes
  • nmsed_scores: A [batch_size, keepTopK] float32 tensor containing the scores for the boxes
  • nmsed_classes: A [batch_size, keepTopK] float32 tensor containing the classes for the boxes
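
As a worked illustration of this layout, here is a minimal C++ sketch (an assumed helper, not code from post_processor) that collects the valid detections for one batch item from these four host buffers:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Detection {
    float x1, y1, x2, y2;  // box corners, in the coordinate space the model emits
    float score;
    int   classId;
};

// numDetections: [batch]             (int32)
// nmsedBoxes:    [batch, keepTopK, 4] (float32)
// nmsedScores:   [batch, keepTopK]    (float32)
// nmsedClasses:  [batch, keepTopK]    (float32 class ids)
std::vector<Detection> parseBatchedNms(const int32_t* numDetections,
                                       const float* nmsedBoxes,
                                       const float* nmsedScores,
                                       const float* nmsedClasses,
                                       int batchIdx, int keepTopK)
{
    std::vector<Detection> dets;
    const int n = numDetections[batchIdx];  // only the first n entries are valid
    dets.reserve(n);
    for (int i = 0; i < n; ++i) {
        const float* box =
            nmsedBoxes + (static_cast<std::size_t>(batchIdx) * keepTopK + i) * 4;
        dets.push_back({box[0], box[1], box[2], box[3],
                        nmsedScores[batchIdx * keepTopK + i],
                        static_cast<int>(nmsedClasses[batchIdx * keepTopK + i])});
    }
    return dets;
}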

5~8. RetinaNet / DSSD / SSD / FasterRCNN

These four models share the same output layer, named NMS, whose implementation corresponds to the TRT OSS nmsPlugin (a decoding sketch follows the list):

  • an output of shape [batchSize, 1, keepTopK, 7] which contains the nmsed box class IDs (1 value), nmsed box scores (1 value) and nmsed box locations (4 values)
  • another output of shape [batchSize, 1, 1, 1] which contains the nmsed box count.
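
A hedged sketch of walking that output on the host; the 7-float record layout [image_id, label, confidence, xmin, ymin, xmax, ymax] and the int32 keep-count type follow the TRT OSS nmsPlugin documentation and should be verified against the plugin version you build:

#include <cstdint>
#include <cstdio>

// nms:       [batchSize, 1, keepTopK, 7] float32
// keepCount: [batchSize, 1, 1, 1], commonly int32 -- verify for your build
void dumpNms(const float* nms, const int32_t* keepCount, int batchIdx, int keepTopK)
{
    const int count = keepCount[batchIdx];
    for (int i = 0; i < count; ++i) {
        const float* det = nms + (static_cast<long>(batchIdx) * keepTopK + i) * 7;
        // det = [image_id, label, confidence, xmin, ymin, xmax, ymax]
        std::printf("class=%d score=%.3f box=(%.3f, %.3f, %.3f, %.3f)\n",
                    static_cast<int>(det[1]), det[2], det[3], det[4], det[5], det[6]);
    }
}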

9. PeopleSegNet

The model has the following two outputs:

  • generate_detections: A [batchSize, keepTopK, C*6] tensor containing the bounding boxes, class IDs and scores
  • mask_head/mask_fcn_logits/BiasAdd: A [batchSize, keepTopK, C+1, 28*28] tensor containing the masks
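
Given that flattened shape, the 28x28 mask for one detection and class can be located with plain row-major indexing; the helper below is illustrative, not code from the repo:

#include <cstddef>

// maskTensor: [batchSize, keepTopK, C+1, 28*28] float32, row-major.
// Returns a pointer to the 784 mask values for detection detIdx and class classIdx.
const float* maskFor(const float* maskTensor, int batchIdx, int keepTopK,
                     int numClassesPlusBg, int detIdx, int classIdx)
{
    constexpr int kMaskArea = 28 * 28;
    const std::size_t offset =
        ((static_cast<std::size_t>(batchIdx) * keepTopK + detIdx)
          * numClassesPlusBg + classIdx) * kMaskArea;
    return maskTensor + offset;  // threshold (after sigmoid) and resize to the box
}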

10~12. UNET / PeopleSemSegNet / CitySemSegFormer

  • argmax_1/output: A [batchSize, H, W, 1] tensor containing the class id per pixel location
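
For example, a per-frame class histogram can be built directly from that map; the sketch below assumes an int32 class-id buffer on the host (the exact dtype depends on the exported model):

#include <cstddef>
#include <cstdint>
#include <map>

// argmaxOut: [batchSize, H, W, 1] class ids; int32 assumed here.
std::map<int, std::size_t> classHistogram(const int32_t* argmaxOut,
                                          int batchIdx, int height, int width)
{
    std::map<int, std::size_t> hist;
    const std::size_t frameSize = static_cast<std::size_t>(height) * width;
    const int32_t* frame = argmaxOut + static_cast<std::size_t>(batchIdx) * frameSize;
    for (std::size_t p = 0; p < frameSize; ++p)
        ++hist[frame[p]];  // count pixels per class id
    return hist;
}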

13. multi_task

  • refer to the detailed README for how to configure and run the model

14~15. EfficientDet / Retail Object Detection

Please note there are two Retail Object Detection models. These models have the following four outputs:

  • num_detections: This is a [batch_size, 1] tensor of data type int32. The last dimension is a scalar indicating the number of valid detections per batch image. It can be less than max_output_boxes. Only the top num_detections[i] entries in nms_boxes[i], nms_scores[i] and nms_classes[i] are valid.
  • detection_boxes: This is a [batch_size, max_output_boxes, 4] tensor of data type float32 or float16, containing the coordinates of non-max suppressed boxes. The output coordinates will always be in BoxCorner format, regardless of the input code type.
  • detection_scores: This is a [batch_size, max_output_boxes] tensor of data type float32 or float16, containing the scores for the boxes.
  • detection_classes: This is a [batch_size, max_output_boxes] tensor of data type int32, containing the classes for the boxes.

16~23. FaceDetect / Facial Landmarks Estimation / EmotionNet / Gaze Estimation / GestureNet / HeartRateNet / BodyPoseNet / PoseClassification

  • refer to the detailed README for how to configure and run the model

24. PeopleNet Transformer

The model has the following two outputs:

  • pred_logits: This is a [batch_size, num_queries, num_classes] tensor of data type float32. The tensor contains probability values of each class.
  • pred_boxes: This is a [batch_size, num_queries, 4] tensor of data type float32. The tensor represents the 2D bounding box coordinates in the format of [center_x, center_y, width, height].
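
A small sketch of the usual post-processing for this layout: take the highest-scoring class per query from pred_logits and convert the box from center format to corners. Names are illustrative; normalization and score thresholding are model-specific:

#include <algorithm>

struct BoxCorners { float x1, y1, x2, y2; };

// predBox points at one query's [center_x, center_y, width, height].
BoxCorners centerToCorners(const float* predBox)
{
    return { predBox[0] - 0.5f * predBox[2], predBox[1] - 0.5f * predBox[3],
             predBox[0] + 0.5f * predBox[2], predBox[1] + 0.5f * predBox[3] };
}

// logits points at one query's [num_classes] scores from pred_logits.
int bestClass(const float* logits, int numClasses)
{
    return static_cast<int>(std::max_element(logits, logits + numClasses) - logits);
}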

25~26. Re-Identification / Retail Item Recognition

These models are trained to extract an embedding vector from an image. The image is the cropped area of a bounding box from a primary-gie task, such as people detection by PeopleNet Transformer or retail item detection by Retail Object Detection. These embedding extraction models are typically arranged as the secondary GIE module in a DeepStream pipeline.

Re-Identification uses a ResNet50 backbone.

The output layer is:

  • fc_pred: This is a [batch_size, embedding_size] tensor of data type float32. The tensor contains the embedding vector of size embedding_size = 256.

Retail Item Recognition uses a ResNet101 backbone.

The output layer is:

  • outputs: This is a [batch_size, 2048] tensor of data type float32. The tensor contains the embedding vector of size 2048.
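
Such embeddings are typically compared with cosine similarity when matching a query crop against a gallery; a minimal sketch (not from the repo):

#include <cmath>

// a, b: two embedding vectors of length n (256 for Re-Identification,
// 2048 for Retail Item Recognition).
float cosineSimilarity(const float* a, const float* b, int n)
{
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (int i = 0; i < n; ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);  // guard against zero norm
}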

FAQ

Measure The Inference Perf

# 1. Build the TensorRT engine through this sample; for example, build YoloV3 with batch_size=2
./ds-tao -c pgie_yolov3_tao_config.txt -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 -b 2
## after this is done, the TRT engine file is generated under models/$(MODEL), e.g. models/yolov3/ for the above command
# 2. Measure the inference perf with trtexec, following the above example
cd models/yolov3/
trtexec --batch=2 --useSpinWait --loadEngine=yolo_resnet18.etlt_b2_gpu0_fp16.engine
## then you can find the per *BATCH* inference time in the trtexec output log

About misc folder

# The files in the folder are used by TAO dev blogs:
## 1.  Training State-Of-The-Art Models for Classification and Object Detection with NVIDIA TAO Toolkit
## 2.  Real time vehicle license plate detection and recognition using NVIDIA TAO Toolkit

Others Models

There are some special models which are not exactly detectors, classifiers or segmentation models. The sample applications for these special models are located in apps/tao_others. These samples should run on DeepStream 6.1 or later versions. Please refer to the apps/tao_others/README.md document for details.

Graph Composer Samples

Some special models need dedicated DeepStream pipelines to run. The DeepStream sample graphs for them are located in graphs/tao_others. Please refer to the graphs/README.md file for more details.

Known issues

  1. For some YOLO models, some layers of the model should use FP32 precision. This is a characteristic of the network: accuracy drops rapidly when most layers run in INT8 precision. Please refer to layer-device-precision for more details.
  2. Currently the citySemSegFormer model only supports batch-size 1.
