Sample for Car License Plate Recognition
Description
This sample shows how to use cascaded models for detection and classification with DeepStream SDK version 5.0.1 or later. All models in this sample are TAO 3.0 models.
PGIE (car detection) -> SGIE (car license plate detection) -> SGIE (car license plate recognition)
This pipeline is based on the three TAO models below:
- Car detection model https://ngc.nvidia.com/catalog/models/nvidia:tao:trafficcamnet
- LPD (car license plate detection) model https://ngc.nvidia.com/catalog/models/nvidia:tao:lpdnet
- LPR (car license plate recognition/text extraction) model https://ngc.nvidia.com/catalog/models/nvidia:tao:lprnet
For more details about the TAO 3.0 LPD and LPR models and TAO training, please refer to the TAO documentation.
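For orientation, the cascade above maps onto a GStreamer pipeline roughly as sketched below. This is a minimal illustration only: the sample application builds its pipeline programmatically, and the config file names here are placeholders, not files shipped with this repository.

```
# Illustrative sketch of the PGIE -> SGIE -> SGIE cascade.
# The config file names are placeholders; the real application
# constructs this pipeline in code rather than via gst-launch-1.0.
gst-launch-1.0 filesrc location=car_video.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
    nvstreammux name=m batch-size=1 width=1920 height=1080 \
    ! nvinfer config-file-path=pgie_trafficcamnet_config.txt \
    ! nvinfer config-file-path=sgie_lpd_config.txt \
    ! nvinfer config-file-path=sgie_lpr_config.txt \
    ! nvdsosd ! nveglglessink
```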
Performance
The table below shows the end-to-end performance of processing 1080p videos with this sample application.
| Device | Number of streams | Batch Size | Total FPS |
| --- | --- | --- | --- |
| Jetson Nano | 1 | 1 | 9.2 |
| Jetson NX | 3 | 3 | 80.31 |
| Jetson Xavier | 5 | 5 | 146.43 |
| Jetson Orin | 5 | 5 | 341.65 |
| T4 | 14 | 14 | 447.15 |
Prerequisites
- Make sure the deepstream-test1 sample runs successfully to verify your DeepStream installation.
- Download the x86 or Jetson tao-converter that is compatible with your platform from the links in https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/tao-converter/version.
- The LPR sample application can work as a Triton client on x86 platforms.
Download
- Download the project with SSH or HTTPS
// SSH
git clone [email protected]:NVIDIA-AI-IOT/deepstream_lpr_app.git
// or HTTPS
git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
- Prepare Models
All models can be downloaded with the following commands:
cd deepstream_lpr_app/
For US car plate recognition
./download_convert.sh us 0   # if DeepStream SDK 5.0.1, use ./download_convert.sh us 1
For Chinese car plate recognition
./download_convert.sh ch 0   # if DeepStream SDK 5.0.1, use ./download_convert.sh ch 1
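For reference, the LPR engine conversion that download_convert.sh performs looks roughly like the sketch below. The exact paths and flags are assumptions and may differ between releases; consult the script itself for the authoritative commands.

```
# Sketch of the LPR (US) engine conversion step, assuming the model was
# downloaded to models/LP/LPR/. Paths and flags may differ by release.
./tao-converter -k nvidia_tlt \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    -t fp16 \
    -e models/LP/LPR/lpr_us_onnx_b16.engine \
    models/LP/LPR/us_lprnet_baseline18_deployable.etlt
```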
Prepare Triton Server
Starting with DeepStream 6.1, the LPR sample application supports three inferencing modes:
- gst-nvinfer inferencing based on TensorRT
- gst-nvinferserver inferencing as Triton CAPI client (only for x86)
- gst-nvinferserver inferencing as Triton gRPC client (only for x86)
The following instructions are needed only when the LPR sample application works with gst-nvinferserver inferencing on x86 platforms as the Triton client. If the LPR sample application runs in nvinfer mode, go directly to the Build and Run section.
The Triton Inference Server libraries must be installed if the DeepStream LPR sample application is to work as a Triton client; the Triton client documentation explains how to install the necessary libraries. An easier way is to run the DeepStream application in the DeepStream Triton container (a container sketch follows the list below).
- To set up the Triton Inference Server for native CAPI inferencing, refer to triton_server.md.
- To set up the Triton Inference Server for gRPC inferencing, refer to triton_server_grpc.md.
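If you take the container route, a typical way to start the DeepStream Triton container is sketched below. The image tag here is an assumption and should be replaced to match your installed DeepStream release.

```
# Assumption: pick the -triton image tag matching your DeepStream release.
docker run --gpus all -it --rm --net=host \
    -v /path/to/deepstream_lpr_app:/workspace/deepstream_lpr_app \
    nvcr.io/nvidia/deepstream:6.1-triton
```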
Build and Run
make
cd deepstream-lpr-app
For US car plate recognition
cp dict_us.txt dict.txt
For Chinese car plate recognition
cp dict_ch.txt dict.txt
Run the application:
./deepstream-lpr-app <1: US car plate model | 2: Chinese car plate model> \
    <1: output as h264 file | 2: fakesink | 3: display output> <0: ROI disable | 1: ROI enable> <infer|triton|tritongrpc> \
    <input mp4 file name> ... <input mp4 file name> <output file name>
Or run with a YAML config file:
./deepstream-lpr-app <app YAML config file>
Samples
- Application working with nvinfer
A sample of US car plate recognition:
./deepstream-lpr-app 1 2 0 infer us_car_test2.mp4 us_car_test2.mp4 output.264
Or run with a YAML config file:
./deepstream-lpr-app lpr_app_infer_us_config.yml
A sample of Chinese car plate recognition:
./deepstream-lpr-app 2 2 0 infer ch_car_test.mp4 ch_car_test.mp4 output.264
- Application working with nvinferserver (Triton native samples)
A sample of US car plate recognition:
./deepstream-lpr-app 1 2 0 triton us_car_test2.mp4 us_car_test2.mp4 output.264
Or run with a YAML config file after modifying the Triton part of the yml file:
./deepstream-lpr-app lpr_app_triton_us_config.yml
A sample of Chinese car plate recognition:
./deepstream-lpr-app 2 2 0 triton ch_car_test2.mp4 ch_car_test2.mp4 output.264
Or run with a YAML config file after modifying the Triton part of the yml file:
./deepstream-lpr-app lpr_app_triton_ch_config.yml
- Application working with nvinferserver (Triton gRPC samples)
A sample of US car plate recognition:
./deepstream-lpr-app 1 2 0 tritongrpc us_car_test2.mp4 us_car_test2.mp4 output.264
Or run with a YAML config file after modifying the Triton part of the yml file:
./deepstream-lpr-app lpr_app_tritongrpc_us_config.yml
A sample of Chinese car plate recognition:
./deepstream-lpr-app 2 2 0 tritongrpc ch_car_test2.mp4 ch_car_test2.mp4 output.264
Or run with a YAML config file after modifying the Triton part of the yml file:
./deepstream-lpr-app lpr_app_tritongrpc_ch_config.yml
Notice
- This sample application only supports MP4 files containing H264 video as input.
- For Chinese plate recognition, please make sure the OS supports the Chinese language.
- The second argument of deepstream-lpr-app should be 2 (fakesink) for performance tests.
- The trafficcamnet and LPD models are both INT8 models, while the LPR model is an FP16 model.
- There is a bug in Triton gRPC mode: the first two characters can't be recognized.
- For some YOLO models, some layers of the model should use FP32 precision. This is a network characteristic: accuracy drops rapidly when most layers run in INT8 precision. Please refer to layer-device-precision for more details; a config sketch follows this list.
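For illustration, such per-layer precision overrides are set with the layer-device-precision key in the [property] group of the gst-nvinfer config file. The layer names below are hypothetical placeholders; use the names from your own model.

```
[property]
# Hypothetical layer names: replace with layers from your own network.
layer-device-precision=cls_preds:fp32:gpu;box_preds:fp32:gpu
```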