Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Orin NX
• DeepStream Version: 7.0
• JetPack Version: 6.0
• TensorRT Version: 8.6.2
• Issue Type: question
• How to reproduce the issue: run a custom YOLOv4-Tiny ONNX model on DeepStream 7.0 that previously worked on DeepStream 6.4
I have recently migrated my application from DeepStream 6.4 to DeepStream 7.0 on JetPack 6.0, running on a Jetson Orin NX. While the migration itself completed successfully, I have encountered an issue with my YOLOv4-Tiny model.
Issue Details:
• Model Behavior: After converting my YOLOv4-Tiny ONNX model to a TensorRT engine using DeepStream 7.0, the inference results display an unusually high number of detected objects, all with a confidence score of 1.0. This behavior was not present when using the same model in DeepStream 6.4.
• Configuration Consistency: The configuration file used for inference remains unchanged from the previous setup:
[property]
gpu-id=0
net-scale-factor=1
model-color-format=0 # 0: RGB, 1: BGR
labelfile-path=filter_classes.txt
model-engine-file=filter_recognition_epoch_80.onnx_b1_gpu0_fp32.engine
onnx-file=filter_recognition_epoch_80.onnx
infer-dims=3;416;416
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
network-mode=0 # 0: FP32, 1: INT8, 2: FP16
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=libnvds_infercustomparser_tao.so
[class-attrs-all]
pre-cluster-threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
• Custom Parser: The post-processing library (libnvds_infercustomparser_tao.so) was recompiled in the DeepStream 7.0 environment to ensure compatibility.
Observations:
• The same ONNX model, when converted to a TensorRT engine in DeepStream 6.4, produced accurate detections.
• In DeepStream 7.0, the model detects an excessive number of objects, all with a confidence score of 1.0, on the same sample video.
I am seeking insights into the following:
1. ONNX Model Compatibility: Are there known compatibility issues when using ONNX models between DeepStream versions 6.4 and 7.0? Should an ONNX model function consistently across these versions?
2. Engine Conversion Process: Given that I am not utilizing INT8 precision (thus avoiding the need for a calibration file), should the ONNX to TensorRT engine conversion process yield consistent and accurate results across DeepStream versions?
3. Potential Causes and Solutions: What factors could contribute to the observed behavior of excessive detections with maximum confidence scores? Are there specific configuration parameters or steps I should revisit to ensure accurate detection results in DeepStream 7.0?
Any guidance or recommendations to address this issue would be greatly appreciated.
Thank you in advance for your assistance.