BUG REPORT: DeepStream 5.1 nvinfer Caught SIGSEGV

• Hardware Platform (Jetson / GPU) GPU Tesla T4.
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) –
• TensorRT Version 7.2.2
• NVIDIA GPU Driver Version (valid for GPU only) 460.91.03
• Issue Type( questions, new requirements, bugs) bug

Testing a custom secondary classifier model without a custom parsing library causes a Caught SIGSEGV crash.

The error was discovered by launching the following pipeline with the custom secondary classifier model A:

gst-launch-1.0 filesrc location="/root/barrier_special_raw.mp4" ! qtdemux ! h264parse ! "video/x-h264,stream-format=byte-stream" ! nvv4l2decoder ! m2.sink_0 nvstreammux name=m2 width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! nvinfer config-file-path="/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/config_infer_primary.txt" ! nvtracker tracker-width=480 tracker-height=272 gpu_id=0 ll-lib-file="/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so" ! nvinfer config-file-path="/root/models/A/A_ds_config.txt" ! fakesink

After some investigation it was determined that the error is caused by the second nvinfer. It occurs in the function void merge_classification_output (GstNvInferObjectHistory &history, GstNvInferObjectInfo &new_result) in gstnvinfer_meta_utils.cpp, because of the unconditional strdup at line 207:

attr.attributeLabel = strdup(attr.attributeLabel);

Curiously, the error appears only when nvtracker is present in the pipeline and vanishes when it is removed.
Recompiling the nvinfer plugin with the following patch fixes the issue:

--- gstnvinfer_meta_utils.cpp   2022-04-06 22:15:34.878814358 +0300
+++ gstnvinfer_meta_utils_new.cpp       2022-04-06 22:07:04.661095894 +0300
@@ -277,7 +277,9 @@
   history.cached_info.attributes.assign (new_result.attributes.begin (),
       new_result.attributes.end ());
   for (auto &attr : history.cached_info.attributes) {
-    attr.attributeLabel = strdup(attr.attributeLabel);
+    if (attr.attributeLabel) {
+      attr.attributeLabel = strdup(attr.attributeLabel);
+    }
   }
   history.cached_info.label.assign (new_result.label);

The error is also absent when a custom parsing library is supplied for the A model. Still, it is inconvenient to have to write a custom parsing library just to test whether a custom model can be launched at all.
The A model mentioned here is a custom secondary classifier that cannot be attached or shared because of an NDA, but here are its input/output parameters:

Input layer:    [3;224;224]   // RGB image 224x224
Output layer 0: [5]           // 5 classes
Output layer 1: [6]           // 6 classes
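For reference, a secondary classifier with this shape would be wired into nvinfer with a config along the following lines. This is only a sketch: the file names, IDs, and threshold are illustrative (the real model is under NDA), while the keys are the standard nvinfer config properties.

```
# Hypothetical secondary-classifier config sketch for a model like A.
[property]
gpu-id=0
net-scale-factor=0.0039215686
model-color-format=0          # 0 = RGB, matching the 3x224x224 input
infer-dims=3;224;224
network-type=1                # 1 = classifier
process-mode=2                # 2 = secondary, operates on detected objects
operate-on-gie-id=1           # attach to the primary nvinfer
classifier-threshold=0.5
# The two lines below attach a custom output parser; it is precisely
# when they are absent that attributeLabel can stay NULL and trigger
# the crash described above.
custom-lib-path=/path/to/libcustomparser.so
parse-classifier-func-name=ParseCustomClassifier
```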

We suggest applying the above patch permanently in future DeepStream releases.

The bug is fixed in the latest DeepStream version. Thank you for the suggestion!
