Distinguishing Object Detector and Classifier Metadata in DeepStream Pipeline

• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.1
• JetPack Version (valid for Jetson only) : 6.1
• TensorRT Version : 8.6.2.3
• Issue Type (questions, new requirements, bugs) : question
Hello,

I have a DeepStream pipeline with two separate models:

  1. Object Detector – Detects objects in the frames.

  2. Classifier – Classifies detected objects.

At the end of the pipeline, I have two probe functions attached to the sink pad of a fakesink to process metadata:

Classifier probe function

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

import pyds


def classifier_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Walk frames -> objects -> classifier meta -> label info
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            class_meta_list = obj_meta.classifier_meta_list
            while class_meta_list is not None:
                try:
                    classifier_meta = pyds.NvDsClassifierMeta.cast(class_meta_list.data)
                except StopIteration:
                    break

                label_info_list = classifier_meta.label_info_list
                while label_info_list is not None:
                    try:
                        label_info = pyds.NvDsLabelInfo.cast(label_info_list.data)
                    except StopIteration:
                        break


                    # Process classifier data
                    # ...........

                    try:
                        label_info_list = label_info_list.next
                    except StopIteration:
                        break
                try:
                    class_meta_list = class_meta_list.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        frame_meta.bInferDone = True
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

Object detector probe function

import logging

logger = logging.getLogger(__name__)


def detector_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            # Process detector data
            # ...........
            class_id = obj_meta.class_id
            confidence = obj_meta.confidence
            bbox_left = obj_meta.rect_params.left
            bbox_top = obj_meta.rect_params.top
            bbox_width = obj_meta.rect_params.width
            bbox_height = obj_meta.rect_params.height
            logger.info(
                f"Bounding Box Coordinates: Left: {bbox_left}, Top: {bbox_top}, Width: {bbox_width}, Height: {bbox_height}"
            )

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        frame_meta.bInferDone = True
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
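
For completeness, this is roughly how the two probes above are attached to the sink pad of the fakesink at the end of the pipeline (the fakesink variable name is an assumption; the probe names are the ones used in the snippets above):

sinkpad = fakesink.get_static_pad("sink")
sinkpad.add_probe(Gst.PadProbeType.BUFFER, classifier_probe, 0)
sinkpad.add_probe(Gst.PadProbeType.BUFFER, detector_probe, 0)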

For example, in segmentation tasks we can distinguish the metadata by checking base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META, and the unique_id field tells the models apart. However, in my case both the detector and the classifier produce NvDsMetaType.NVDS_OBJ_META.
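
For context, this is roughly how I pick up the segmentation output (a sketch based on the DeepStream Python segmentation sample, assuming the masks arrive as frame-level user meta):

# Inside the frame loop: segmentation output is attached as frame user meta,
# so it can be recognized by meta_type and attributed via unique_id.
l_user = frame_meta.frame_user_meta_list
while l_user is not None:
    try:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    except StopIteration:
        break

    if user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
        seg_meta = pyds.NvDsInferSegmentationMeta.cast(user_meta.user_meta_data)
        # seg_meta.unique_id identifies which nvinfer instance produced the mask
        print("segmentation meta from gie-unique-id", seg_meta.unique_id)

    try:
        l_user = l_user.next
    except StopIteration:
        break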

Question:

How can I reliably distinguish whether the incoming metadata from frame_meta.obj_meta_list originates from the object detector or the classifier? Unlike segmentation, where the unique_id is available, I don’t see an equivalent identifier for detection/classification.

Is there a recommended approach to track which model generated the metadata?

Could you attach your whole pipeline and post the position of the probe function? Some plugins, such as nvstreamtiler and nvstreamdemux, will destroy the original batches.

@yuweiw thank you for your response!

Here is a diagram of my pipeline:

I currently have two probe functions attached to the fakesink, which is at the very end of the pipeline. While I could resolve my issue by directly attaching each probe function to the corresponding nvinfer elements, I would prefer to have a single probe function at the fakesink.
This probe function should process NvDsFrameMeta from both:

• The object detector (unique-id=4, lower branch of the pipeline)

• The classifier (unique-id=1)

The segmentation models (unique-id=2 and unique-id=3) are straightforward to handle, so they are not a concern.

Question:

How can I distinguish between classifier data and object detection data within a single probe function when processing NvDsObjectMeta?

Both probe functions for the object detector and classifier are included in the main post. I appreciate any guidance on how to differentiate them efficiently!

You can refer to our sample code deepstream_lpr_app.c.

@yuweiw Thank you for the reply and the example. So the solution is to use the unique_component_id property of NvDsObjectMeta. Once you set a different gie-unique-id for each of your models in the configuration file, you can distinguish between them.
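
A minimal sketch of that check inside a single probe (the ids mirror my pipeline, where the detector has gie-unique-id=4 and the classifier gie-unique-id=1; the *_UID names are just placeholders):

DETECTOR_UID = 4      # gie-unique-id of the object detector
CLASSIFIER_UID = 1    # gie-unique-id of the classifier

# Inside the object loop of the single probe at the fakesink:
obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)

if obj_meta.unique_component_id == DETECTOR_UID:
    # This NvDsObjectMeta was generated by the detector
    rect = obj_meta.rect_params
    print("detector bbox:", rect.left, rect.top, rect.width, rect.height)

# NvDsClassifierMeta carries the same field, so classifier results attached to
# an object can be attributed to the classifier model as well.
l_cls = obj_meta.classifier_meta_list
while l_cls is not None:
    try:
        classifier_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
    except StopIteration:
        break

    if classifier_meta.unique_component_id == CLASSIFIER_UID:
        # Process this classifier's label_info_list here
        pass

    try:
        l_cls = l_cls.next
    except StopIteration:
        break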
