Details of Setup
Hardware Platform (Jetson / GPU): GPU
DeepStream Version: 7.1.0
TensorRT Version: 10.7.0.23-1+cuda12.6 amd64
NVIDIA GPU Driver Version (valid for GPU only): 560.35.03 (CUDA Version: 12.6)
Environment:
DeepStream 7.1
YOLOv11 custom parser compiled for NVIDIA TensorRT
Ubuntu 22.04 LTS, Dockerized deployment
Issue Type: Bug
Description of Issue:
I’m encountering an issue while using two YOLOv11 models (one for vehicle detection and another for license plate recognition) within DeepStream 7.1. Both models perform well when tested outside of DeepStream (using the same ONNX files), but only the vehicle detection model displays bounding boxes correctly within the application. The license plate model fails to show any detections despite similar configurations.
config_infer_primary_yoloV11_lp_V1.txt (1.0 KB)
config_infer_primary_yoloV11_vehicle.txt (1003 Bytes)
deepstream_app_config.txt (954 Bytes)
What can cause this issue, how to debug and fix it?
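The attached config files are not reproduced in the thread, so as a starting point for debugging, here is a sketch of the `[property]` fields in a `config_infer` file that most often differ between a working and a non-working model when using a custom YOLO parser. All paths and values below are placeholders, not the poster's actual configuration:

```
[property]
onnx-file=trained_models/best.onnx
model-engine-file=trained_models/best.onnx_b1_gpu0_fp32.engine
labelfile-path=labels_lp.txt
# A mismatch here (e.g. left at the vehicle model's class count) can
# silently suppress all detections for a single-class plate model.
num-detected-classes=1
gie-unique-id=1
network-mode=0
# Both must point at the parser built for this model's output layout.
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
# A threshold tuned for the vehicle model may be too high for plates.
pre-cluster-threshold=0.25
```

Comparing these fields between the vehicle and license-plate configs, and lowering `pre-cluster-threshold` temporarily, is a quick way to rule out config mismatches before suspecting the model itself.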
yuweiw
February 27, 2025, 1:55am
3
There is only one [primary-gie] in your DeepStream config file.
Regarding your license plate recognition model: does it just detect the license plate, or does it also recognize the specific characters on the plate?
Could you attach the results from running the model outside of DeepStream?
Yes, I simplified the setup because it wasn’t detecting license plates as a secondary GIE either. It now has just a primary GIE for testing.
After training it was exported as:
yolo export model=trained_models/best.pt format=onnx imgsz=640,640
Its shape:
File name: trained_models/best.onnx
Model: main_graph
Inputs:
  Input 0: images, Type: 1, Shape: [1, 3, 640, 640]
Outputs:
  Output 0: output0, Type: 1, Shape: [1, 5, 8400]
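The output shape [1, 5, 8400] is consistent with a single-class YOLOv11 detection head: 5 channels = 4 box coordinates + 1 class score, and 8400 is the total prediction count over the three feature maps (strides 8, 16, 32) at 640×640. A quick sanity check:

```python
# Sanity-check the 8400 predictions in the [1, 5, 8400] output:
# YOLOv11 (like YOLOv8) predicts one box per grid cell on three
# feature maps with strides 8, 16 and 32.
imgsz = 640
anchors = sum((imgsz // s) ** 2 for s in (8, 16, 32))
print(anchors)  # 80*80 + 40*40 + 20*20 = 8400

# 5 channels = 4 box coordinates (cx, cy, w, h) + 1 class score,
# i.e. a single-class detector such as "license plate".
channels = 4 + 1
print(channels)  # 5
```

If `num-detected-classes` in the nvinfer config does not match this single-class layout, the parser can misread the tensor and drop every detection.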
Tested and working when run with:
yolo predict model=trained_models/best.onnx source=video.mp4 imgsz=640 task=detect save=True project=predict name=video_with_boxes
But I get no bounding boxes when the same model is run in DeepStream.
Just to detect the license plate.
lqdisme
February 27, 2025, 9:44am
5
How was the ONNX model exported before being serialized into an engine for the DeepStream pipeline? Are you using the same ONNX model for both Ultralytics and DeepStream? You can refer to the repository below for the correct way to generate the ONNX model: quangdungluong/DeepStream-YOLOv11 (Plug-and-Play Custom Parsers for AI Models in NVIDIA DeepStream SDK; supports the YOLOv11 model).
I tried the suggested repo for ONNX conversion, but running
scripts/compile_nvdsinfer.sh
failed with my setup:
make: *** [Makefile:60: libnvds_infercustomparser_yolo.so] Error 1
And without it the output was:
This repo worked perfectly for me:
NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
Using its export_yoloV8.py script with the YOLOv11 .pt model, the exported ONNX worked in my DeepStream setup:
python3 export_yoloV8.py -w best.pt --opset 12 --simplify
Thank you for your help and pointing me in the right direction!
According to my research, YOLO-ONNX and DeepStream-TensorRT can produce very different detections for the same image because DeepStream’s scaling algorithm and OpenCV’s yield different results (see the thread "Image comparison Deepstream vs opencv python"). I have therefore customized the preprocess plugin to support OpenCV-style scaling; you can try it out at GitHub - hieptran2k2/DeepStream_Custom_Preprocess_Plugin (a custom preprocessing plugin for DeepStream using OpenCV with a scaling filter). If you find the repo useful, please give it a star! Additionally, you can refer to the custom YOLOv11 NMS export in GitHub - hieptran2k2/DeepStream-Yolo-BBox-Add-Nms (NVIDIA DeepStream SDK 7.1 / 7.0 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-BBox models).
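To make the scaling point above concrete, here is a minimal illustration (not DeepStream’s or OpenCV’s actual code): resizing the same 1-D signal with nearest-neighbor versus linear interpolation produces different pixel values, which is the kind of preprocessing mismatch that can shift or suppress detections at inference time.

```python
def resize_nearest(src, dst_len):
    """Nearest-neighbor resize of a 1-D list of pixel values."""
    scale = len(src) / dst_len
    return [src[min(int(i * scale), len(src) - 1)] for i in range(dst_len)]

def resize_bilinear(src, dst_len):
    """Linear-interpolation resize of a 1-D list of pixel values."""
    scale = (len(src) - 1) / (dst_len - 1) if dst_len > 1 else 0.0
    out = []
    for i in range(dst_len):
        x = i * scale
        x0 = int(x)
        x1 = min(x0 + 1, len(src) - 1)
        frac = x - x0
        out.append(src[x0] * (1 - frac) + src[x1] * frac)
    return out

signal = [0, 100, 0, 100]
print(resize_nearest(signal, 7))   # [0, 0, 100, 100, 0, 0, 100]
print(resize_bilinear(signal, 7))  # [0.0, 50.0, 100.0, 50.0, 0.0, 50.0, 100.0]
```

The two outputs differ at several positions even though the input is identical, which is why matching the training-time (OpenCV) resize behavior in the DeepStream preprocess stage can matter.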