nvinfer is not populating 'confidence' field in NvDsObjectMeta (DS 4.0)

I’m stumped on this one. I’ve been through all the developer and plug-in docs and can’t find any reason why this particular metadata item is missing. Is there a bug?

I am working from the deepstream-app source code in DS 4.0 and added some g_print() console output to inspect the metadata that the nvinfer plugin attaches after the PGIE. The code below (all_bbox_generated) is called from gie_processing_done_buf_prob() in deepstream-app.c, which in turn is attached to the sink side of the nvosd plug-in (trace the code back up through create_processing_instance(), for example). My debugging output shows that detected objects have ROI metadata, class metadata, etc., BUT the confidence value is always 0:

(source cam = 0) detected class 0 (Car) with conf 0.00000000 @ (195, 104, 158, 68)
(Display text: Car)
(source cam = 0) detected class 0 (Car) with conf 0.00000000 @ (237, 106, 154, 66)
(Display text: Car)
(source cam = 0) detected class 0 (Car) with conf 0.00000000 @ (276, 106, 145, 66)
(Display text: Car)
(source cam = 0) detected class 0 (Car) with conf 0.00000000 @ (316, 108, 126, 64)
(Display text: Car)

Device: Jetson Nano with the latest JetPack. I flashed a completely new system image once DS 4.0 was released, so it’s a fresh install.

/**
 * Callback function to be called once all inferences (Primary + Secondary)
 * are done. This is opportunity to modify content of the metadata.
 * e.g. Here Person is being replaced with Man/Woman and corresponding counts
 * are being maintained. It should be modified according to network classes
 * or can be removed altogether if not required.
 */
static void
all_bbox_generated (AppCtx * appCtx, GstBuffer * buf,
    NvDsBatchMeta * batch_meta, guint index)
{
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = l_frame->data;
    if (frame_meta == NULL)
    {
        continue;
    }

    if (frame_meta->bInferDone == 0)
    {
        continue;
    }

    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;

      if (obj == NULL)
      {
          continue;
      }

      gfloat l_probability = obj->confidence;

      // Try to find label's probability this way?  (but sadly it's not populated here either)
      for (NvDsClassifierMetaList *c_meta = obj->classifier_meta_list; c_meta != NULL;
           c_meta = c_meta->next)
      {
          NvDsClassifierMeta *cm = (NvDsClassifierMeta *)c_meta->data;
          if (cm == NULL)
          {
              continue;
          }

          for (NvDsLabelInfoList *li_meta = cm->label_info_list; li_meta != NULL;
            li_meta = li_meta->next)
          {
              NvDsLabelInfo *label_info = (NvDsLabelInfo *)li_meta->data;
              if (label_info == NULL)
              {
                  continue;
              }

              g_print("found label conf = %.8f\n", label_info->result_prob);
              l_probability = label_info->result_prob;
          }
      }

      if (obj->unique_component_id ==
          (gint) appCtx->config.primary_gie_config.unique_id) {
        /* Print out info about the detected object. Note: the rect_params
         * fields are floats, so cast them before printing with %d. */
        g_print ("(source cam = %u) detected class %d (%s) with conf %.8f @ (%d, %d, %d, %d)\n(Display text: %s)\n",
                 frame_meta->source_id,
                 obj->class_id, obj->obj_label, l_probability,
                 (gint) obj->rect_params.left, (gint) obj->rect_params.top,
                 (gint) obj->rect_params.width, (gint) obj->rect_params.height,
                 obj->text_params.display_text);
      }
    }
  }
}

Relevant part of main config file:

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=3
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1

# interval was 4 frames

interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie_primary_nano_config.txt

And the contents of pgie_primary_nano_config.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector_Nano/labels.txt
batch-size=8
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1

I added the following two:

classifier-async-mode=0

# Only operate on the Car and Person classes

operate-on-class-ids=0;2

[class-attrs-all]
threshold=0.6
group-threshold=1

# Set eps=0.7 and minBoxes for enable-dbscan=1

eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0


Hi,

Have you checked this file:
${deepstream-4.0}/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp

The confidence value is stored in object.detectionConfidence.
Thanks.

Thanks for the pointer. I compiled that library and updated the PGIE config file to use that function to parse the bbox. But I’m not clear on how to access that specific metadata from the deepstream-app code where I’m processing metadata as shown above (via NvDsBatchMeta iteration). The custom parsing function writes back to NvDsInferObjectDetectionInfo, but I have no idea where to grab that object in the metadata passed between plug-ins in the GStreamer pipeline.

For example, I don’t see a function I can call in the API docs (NVIDIA DeepStream SDK API Reference: NvDsInfer API).

How can I access it?

And why doesn’t the default parser implemented in the SDK populate the confidence field, since it populates the ROI and other fields? It seems strange to have to use a ‘custom’ parser function for a field so basic to inference. Perhaps there is a good reason I’m not understanding.

Joe

Hi,

I have the same problem. @joe-dev-11, did you figure out any way to fix it?

Thanks.

I haven’t. I’m waiting for an answer from Nvidia.

They show grabbing confidence from NvDsObjectMeta in osd_sink_pad_buffer_probe() in deepstream_test4_app.c, and that function is attached as a probe on the sink pad of the OSD plug-in. I have been trying to grab the same field and coming up with zero every time, as discussed above. If the requirement is that you HAVE to use the OSD plug-in to get confidence in the metadata, that would be a bizarre requirement, especially since Nvidia turns off OSD for its published performance stats and I don’t plan to use OSD in my final pipeline.

I’m sure that because DS 4.0 is brand new the doc team has more to add, but it would be great to get a clear answer in this thread.

I have the same problem of zero confidence values. I found that in nvdsinfer_context_impl_output_parsing.cpp for the nvinfer plugin, the fillDetectionOutput function calls either clusterAndFillDetectionOutputDBSCAN or clusterAndFillDetectionOutputCV after inferencing, and neither populates the confidence levels for the NvDsInferObjects created from the detected objects. I have yet to change the code and test it out, but I think this is where the code needs to be tweaked a little to get the missing confidence numbers.

Thanks a lot. Let me know how it goes.

Hi KuanYong.

Any updates? I’ve tried what you suggested but still nothing. See https://devtalk.nvidia.com/default/topic/1060849/deepstream-sdk/deepstream-v4-zero-confidence-problem/ .

Inside /opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp, there is a function called attach_metadata_detector() which assigns obj_meta->confidence = 0.0;

So you could add a new field to the structure to store the confidence value, change that line to assign the value to obj_meta, and then recompile nvdsinfer and gst-nvinfer.

/opt/nvidia/deepstream/deepstream-4.0/sources/includes/nvdsinfer_context.h:

/**
 * Holds the information about one detected object.
 */
typedef struct
{
    /** Offset from the left boundary of the frame. */
    unsigned int left;
    /** Offset from the top boundary of the frame. */
    unsigned int top;
    /** Object width. */
    unsigned int width;
    /** Object height. */
    unsigned int height;
    /* confidence value for the object class. */
    float confidence;
    /* Index for the object class. */
    int classIndex;
    /* String label for the detected object. */
    char *label;
} NvDsInferObject;

/opt/nvidia/deepstream/deepstream-4.0/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp:

/**
 * Cluster objects using OpenCV groupRectangles and fill the output structure.
 */
void
NvDsInferContextImpl::clusterAndFillDetectionOutputCV(NvDsInferDetectionOutput &output)
{
...
    for (unsigned int c = 0; c < m_NumDetectedClasses; c++)
    {
        /* Add coordinates and class ID and the label of all objects
         * detected in the frame to the frame output. */
        for (auto & rect:m_PerClassCvRectList[c])
        {
            NvDsInferObject &object = output.objects[output.numObjects];
            object.left = rect.x;
            object.top = rect.y;
            object.width = rect.width;
            object.height = rect.height;
            object.classIndex = c;
            object.label = nullptr;
            object.confidence = 0.6; // derive your confidence value from somewhere
            if (c < m_Labels.size() && m_Labels[c].size() > 0)
                object.label = strdup(m_Labels[c][0].c_str());
            output.numObjects++;
        }
    }
...
}

/opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp:

/**
 * Attach metadata for the detector. We will be adding a new metadata.
 */
void
attach_metadata_detector (GstNvInfer * nvinfer, GstMiniObject * tensor_out_object,
    GstNvInferFrame & frame, NvDsInferDetectionOutput & detection_output)
{
...
  for (guint i = 0; i < detection_output.numObjects; i++) {
    NvDsInferObject & obj = detection_output.objects[i];
...
    obj_meta = nvds_acquire_obj_meta_from_pool (batch_meta);

    obj_meta->unique_component_id = nvinfer->unique_id;
    // obj_meta->confidence = 0.0;
    obj_meta->confidence = obj.confidence;
...
   }
...
}

Thanks @KuanYong. I’ve already done exactly what you said, BUT it didn’t work.

I also tried setting object.left, object.top, object.width, and object.height to zero in NvDsInferContextImpl::clusterAndFillDetectionOutputCV, just to see whether the NvDsInferObject information would be propagated (it wasn’t).
I rebuilt everything.
I still got non-null bbox detections, which shouldn’t happen if the function were in use.
This suggests that clusterAndFillDetectionOutputCV isn’t having any effect in my build.
I even tried modifying NvDsInferContextImpl::clusterAndFillDetectionOutputDBSCAN and still got the same results.

Hi @anassamar8,

I am using the objectDetector_Yolo sample custom parser, and with the changes I posted I was able to get printouts of 0.6 as the confidence level for my detected objects, so it must be something else you missed. Maybe check that the correct .so files are being picked up by GStreamer.
I made the changes to the files in their original locations, ran sudo make clean and sudo make install in both folders, and it worked for me.

I didn’t care about grouping the rectangles, so I could get all the confidence numbers by changing clusterAndFillDetectionOutputCV() to the following:

/opt/nvidia/deepstream/deepstream-4.0/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp

/**
 * Skip clustering entirely and copy every detected object (along with
 * its confidence) straight into the output structure.
 */
void
NvDsInferContextImpl::clusterAndFillDetectionOutputCV(NvDsInferDetectionOutput &output)
{
    output.objects = new NvDsInferObject[m_ObjectList.size()];
    output.numObjects = 0;

    for (unsigned int i = 0; i < m_ObjectList.size(); i++)
    {
        NvDsInferObject &object = output.objects[i];
        object.left = m_ObjectList[i].left;
        object.top = m_ObjectList[i].top;
        object.width = m_ObjectList[i].width;
        object.height = m_ObjectList[i].height;
        object.classIndex = m_ObjectList[i].classId;
        object.confidence = m_ObjectList[i].detectionConfidence;
        unsigned int c = object.classIndex;
        object.label = nullptr;
        if (c < m_Labels.size() && m_Labels[c].size() > 0)
            object.label = strdup(m_Labels[c][0].c_str());
        output.numObjects++;
    }
}

My Output:
ubuntu@ubuntu-desktop:~/workspace/testrun$ ./deepstream-app

Using winsys: x11
Creating LL OSD context new
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:163>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:149>: Pipeline running

Creating LL OSD context new
id: -1 label[0]: car conf: 0.919336 bbox: [829 252 1035 367]
id: -1 label[0]: carplate conf: 0.974722 bbox: [856 329 876 336]
id: -1 label[0]: car conf: 0.964990 bbox: [829 250 1035 367]
id: -1 label[0]: car conf: 0.852407 bbox: [654 275 780 337]
id: -1 label[0]: carplate conf: 0.995340 bbox: [852 329 874 337]
id: -1 label[0]: car conf: 0.991833 bbox: [822 250 1042 365]
id: -1 label[0]: car conf: 0.850882 bbox: [355 269 481 349]
id: -1 label[0]: car conf: 0.799025 bbox: [664 273 784 335]
id: -1 label[0]: carplate conf: 0.999500 bbox: [850 329 872 338]
id: -1 label[0]: car conf: 0.962592 bbox: [818 254 1034 365]

Good luck!


Thanks a lot KuanYong. It finally worked.

For anyone interested, it is possible to group the rectangles while keeping confidence values by using the cv::HOGDescriptor::groupRectangles function.

Here is my code:

void
NvDsInferContextImpl::clusterAndFillDetectionOutputCV(NvDsInferDetectionOutput &output)
{
    size_t totalObjects = 0;

    for (auto & list:m_PerClassCvRectList)
        list.clear();

    std::vector<std::vector<double>> confidences(m_NumDetectedClasses);

    /* Parsing has added all objects to the m_ObjectList vector.
     * They need to be separated per class for grouping. */
    for (auto & object:m_ObjectList)
    {
        m_PerClassCvRectList[object.classId].emplace_back(object.left,
                object.top, object.width, object.height);
        confidences[object.classId].emplace_back(object.detectionConfidence);
    }

    for (unsigned int c = 0; c < m_NumDetectedClasses; c++)
    {
        /* Cluster together rectangles with similar locations and sizes
         * since these rectangles might represent the same object. Refer
         * to opencv documentation of groupRectangles for more
         * information about the tuning parameters for grouping. */
        cv::HOGDescriptor desc;
        if (m_PerClassDetectionParams[c].groupThreshold > 0)
            desc.groupRectangles(m_PerClassCvRectList[c],
                    confidences[c],
                    m_PerClassDetectionParams[c].groupThreshold,
                    m_PerClassDetectionParams[c].eps);
        totalObjects += m_PerClassCvRectList[c].size();
    }

    output.objects = new NvDsInferObject[totalObjects];
    output.numObjects = 0;

    for (unsigned int c = 0; c < m_NumDetectedClasses; c++)
    {
        /* Add coordinates and class ID and the label of all objects
         * detected in the frame to the frame output. */
        int i = 0;
        for (auto & rect:m_PerClassCvRectList[c])
        {
            NvDsInferObject &object = output.objects[output.numObjects];
            object.left = rect.x;
            object.top = rect.y;
            object.width = rect.width;
            object.height = rect.height;
            object.classIndex = c;
            object.confidence = confidences[c][i++];
            object.label = nullptr;
            if (c < m_Labels.size() && m_Labels[c].size() > 0)
                object.label = strdup(m_Labels[c][0].c_str());
            output.numObjects++;
        }
    }
}

Did NvDsInferObject change? Because I’m getting: ‘struct NvDsInferObject’ has no member named ‘confidence’. EDIT: OK, I see that the field needs to be added to the struct definition. But now I’m getting the following error:

nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): cuda/cudaScaleLayer.cpp (99) - Cuda Error in execute: 81 (PTX JIT compiler library not found)

Anyone any idea?

Hi,

This issue is related to the CUDA toolkit installation.

Could you share your environment and setup steps with us?
Are you using a Jetson platform flashed with sdkmanager?

Thanks.

Hi, this happened after I ran make and make install for the nvdsinfer lib in /opt/nvidia/deepstream/deepstream-4.0/sources/libs/nvdsinfer/. I noticed there were symlinks to a file (libnvidia-ptxjitcompiler.so.32.1.0) that no longer existed. I solved it by running:

cp /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.32.2.1 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.32.1.0

Hi,

Thanks for your feedback.
We will try to reproduce it in our environment.

Thanks.

Hi,

We compiled and installed the nvdsinfer library and everything works well on our side.
Please let us know if the issue occurs again.

Thanks.

If anyone on this thread is trying to get detection probability values from the PGIE plugin, please refer to the patch below:

diff --git a/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c b/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c
index 13d0b72..f6ba273 100644
--- a/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c
+++ b/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c
@@ -70,6 +70,7 @@ osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
         for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
                 l_obj = l_obj->next) {
             obj_meta = (NvDsObjectMeta *) (l_obj->data);
+            printf("Object class : %d Object probability %f |", obj_meta->class_id, obj_meta->confidence);
             if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
                 vehicle_count++;
                 num_rects++;
@@ -79,6 +80,7 @@ osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
                 num_rects++;
             }
         }
+        printf("\n");
         display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
         NvOSD_TextParams *txt_params  = &display_meta->text_params[0];
         display_meta->num_labels = 1;
diff --git a/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt b/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
index 708b8f1..a27c45b 100644
--- a/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
+++ b/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt
@@ -70,8 +70,10 @@ num-detected-classes=4
 interval=0
 gie-unique-id=1
 output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
+enable-dbscan=1
 
 [class-attrs-all]
 threshold=0.2
-eps=0.2
-group-threshold=1
+eps=0.7
+minBoxes=3
+#group-threshold=1
diff --git a/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp b/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp
index d50c799..f455bb3 100644
--- a/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp
+++ b/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp
@@ -80,7 +80,7 @@ attach_metadata_detector (GstNvInfer * nvinfer, GstMiniObject * tensor_out_objec
     obj_meta = nvds_acquire_obj_meta_from_pool (batch_meta);
 
     obj_meta->unique_component_id = nvinfer->unique_id;
-    obj_meta->confidence = 0.0;
+    obj_meta->confidence = obj.detectionConfidence;
 
     /* This is an untracked object. Set tracking_id to -1. */
     obj_meta->object_id = UNTRACKED_OBJECT_ID;
diff --git a/sources/includes/nvdsinfer_context.h b/sources/includes/nvdsinfer_context.h
index d4cd776..57907dd 100644
--- a/sources/includes/nvdsinfer_context.h
+++ b/sources/includes/nvdsinfer_context.h
@@ -369,6 +369,8 @@ typedef struct
     int classIndex;
     /* String label for the detected object. */
     char *label;
+    /* detection confidence of the object */
+    float detectionConfidence;
 } NvDsInferObject;
 
 /**
diff --git a/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp b/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp
index d50bddc..f5f9a55 100644
--- a/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp
+++ b/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp
@@ -282,6 +282,7 @@ NvDsInferContextImpl::clusterAndFillDetectionOutputDBSCAN(NvDsInferDetectionOutp
             object.label = nullptr;
             if (c < m_Labels.size() && m_Labels[c].size() > 0)
                 object.label = strdup(m_Labels[c][0].c_str());
+            object.detectionConfidence = m_PerClassObjectList[c][i].detectionConfidence;
             output.numObjects++;
         }
     }

Once you have applied the patch above, follow these steps:

$ cd sources/libs/nvdsinfer/
$ sudo CUDA_VER=<add version here> make install
$ cd ../../gst-plugins/gst-nvinfer/
$ sudo CUDA_VER=<add version here> make install

Now the nvinfer plugin libraries have been updated to attach probability metadata, but keep in mind that this is currently available only for the DBSCAN clustering algorithm. That means DBSCAN needs to be enabled in the PGIE config, and its parameters need to be tuned for your test videos (refer to the patch above to see how to enable it). The patch also modifies the test1 app to show how to read the values for the sample stream. To try it out, follow these steps:

$ cd sources/apps/sample_apps/deepstream-test1/
$ CUDA_VER=<add version here> make
$ ./deepstream-test1-app ../../../../samples/streams/sample_720p.h264

You should now be able to see all the object probabilities along with the object count being printed on the console.
