DeepStream-Yolo

NVIDIA DeepStream SDK 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 configuration for YOLO models


Important: export the ONNX model with the new export file, regenerate the TensorRT engine with the updated files, and use the new config_infer_primary file that matches your model


Future updates

  • DeepStream tutorials
  • Updated INT8 calibration
  • Support for segmentation models
  • Support for classification models

Improvements on this repository

  • Support for INT8 calibration
  • Support for non-square models
  • Model benchmarks
  • Support for Darknet models (YOLOv4, etc.) using cfg and weights conversion with GPU post-processing
  • Support for YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, YOLOX, YOLOR, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
  • GPU bbox parser (slightly slower than the CPU bbox parser in V100 GPU tests)
  • Support for DeepStream 5.1
  • Custom ONNX model parser (NvDsInferYoloCudaEngineGet)
  • Dynamic batch-size for Darknet and ONNX exported models
  • INT8 calibration (PTQ) for Darknet and ONNX exported models
  • New output structure (fixes wrong output on DeepStream < 6.2) - this requires exporting the ONNX model with the new export file, regenerating the TensorRT engine with the updated files, and using the new config_infer_primary file that matches your model (see the sketch below)
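
For reference, a minimal config_infer_primary sketch for an ONNX-exported model. The model and engine filenames are placeholders; the repo provides a ready-made config_infer_primary file for each supported model:

[property]
...
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
batch-size=1
network-mode=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
...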

Getting started

Requirements

  • DeepStream 6.2 on x86 platform
  • DeepStream 6.1.1 on x86 platform
  • DeepStream 6.1 on x86 platform
  • DeepStream 6.0.1 / 6.0 on x86 platform
  • DeepStream 5.1 on x86 platform
  • DeepStream 6.2 on Jetson platform
  • DeepStream 6.1.1 on Jetson platform
  • DeepStream 6.1 on Jetson platform
  • DeepStream 6.0.1 / 6.0 on Jetson platform
  • DeepStream 5.1 on Jetson platform

Supported models

Basic usage

1. Download the repo

git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo

2. Download the cfg and weights files from the Darknet repo to the DeepStream-Yolo folder
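
For example, for YOLOv4 (URLs taken from the AlexeyAB/darknet repository and its release assets; adjust for your model):

wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights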

3. Compile the lib
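
NOTE: CUDA_VER must match the CUDA toolkit version installed on your system (the Makefile uses it to locate CUDA). If unsure, check it first:

nvcc --version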

  • DeepStream 6.2 on x86 platform

    CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.1.1 on x86 platform

    CUDA_VER=11.7 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.1 on x86 platform

    CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.0.1 / 6.0 on x86 platform

    CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 5.1 on x86 platform

    CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.2 / 6.1.1 / 6.1 on Jetson platform

    CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
    
  • DeepStream 6.0.1 / 6.0 / 5.1 on Jetson platform

    CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
    

4. Edit the config_infer_primary.txt file according to your model (example for YOLOv4)

[property]
...
custom-network-config=yolov4.cfg
model-file=yolov4.weights
...

NOTE: By default, dynamic batch-size is used. To use implicit batch-size, uncomment the line

...
force-implicit-batch-dim=1
...

5. Run

deepstream-app -c deepstream_app_config.txt

NOTE: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).
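
NOTE: Once generated, the engine is reused on later runs as long as the model-engine-file in the config_infer file points to the generated file. The filename below follows the lib's default naming for a batch-size 1 FP32 engine and is only illustrative:

[property]
...
model-engine-file=model_b1_gpu0_fp32.engine
...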

NOTE: If you want to use YOLOv2 or YOLOv2-Tiny models, change the deepstream_app_config.txt file before running it

...
[primary-gie]
...
config-file=config_infer_primary_yoloV2.txt
...

Docker usage

  • x86 platform

    nvcr.io/nvidia/deepstream:6.2-devel
    nvcr.io/nvidia/deepstream:6.2-triton
    
  • Jetson platform

    nvcr.io/nvidia/deepstream-l4t:6.2-samples
    nvcr.io/nvidia/deepstream-l4t:6.2-triton
    
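
    NOTE: A typical way to start a container on x86 (assumes the NVIDIA Container Toolkit is installed; add display and volume options as needed):

    docker run --gpus all -it --rm nvcr.io/nvidia/deepstream:6.2-devel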

    NOTE: To compile the nvdsinfer_custom_impl_Yolo lib, you need to install g++ inside the container

    apt-get install build-essential
    

    NOTE: With DeepStream 6.2, the docker containers do not package the libraries needed for certain multimedia operations, such as audio data parsing, CPU decode, and CPU encode. This can affect the processing of video streams/files (e.g. mp4 files that include an audio track). Run the script below inside the container to install the additional packages needed to use all DeepStream SDK features:

    /opt/nvidia/deepstream/deepstream/user_additional_install.sh
    

NMS Configuration

To change the nms-iou-threshold (IoU threshold for NMS), pre-cluster-threshold (minimum detection confidence) and topk (maximum number of detections kept per frame) values, modify the config_infer file

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

NOTE: Make sure to set cluster-mode=2 in the config_infer file.
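
These keys live under [class-attrs-all], while cluster-mode belongs in the [property] section:

[property]
...
cluster-mode=2
...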

Extract metadata

You can get metadata from DeepStream using Python or C/C++. For C/C++, you can edit the deepstream-app or deepstream-test sources. For Python, you can install and edit deepstream_python_apps.

In short, you need to manipulate the NvDsObjectMeta (Python / C/C++) and NvDsFrameMeta (Python / C/C++) structures to get the label, position, etc. of the bboxes.
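
As an illustration, here is a minimal Python pad-probe sketch in the style of the deepstream_python_apps samples. It assumes pyds (the DeepStream Python bindings) is installed and that the probe is attached to a pad downstream of the inference element:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Retrieve the batch metadata attached to the Gst buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Label, confidence and bbox position of each detection
            rect = obj_meta.rect_params
            print(obj_meta.obj_label, obj_meta.confidence,
                  rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK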

My projects: https://www.youtube.com/MarcosLucianoTV