How to get "delayed" metadata from secondary models when classifier-async-mode=1

Hello,

I have a DS 6.0.1 pipeline running a pgie, tracker and a secondary model. I am using the Python API.

My secondary model works well with classifier-async-mode=0, but when I set classifier-async-mode=1 I have two issues:

  • Some objects do not have metadata (despite being larger than input-object-min-width and input-object-min-height, and despite having set interval=0)
  • Some objects carry metadata that belongs to a different object seen earlier (e.g. in Frame 1 an object had the attribute MYATTRIBUTE=VALUE1; in Frame 2, after Frame 1, two objects were reported with MYATTRIBUTE=VALUE1 even though the correct value was clearly MYATTRIBUTE=VALUE2). I am confident the classifier itself is accurate, since this does not happen with classifier-async-mode=0

Based on this, I am assuming I am not parsing the metadata correctly when classifier-async-mode=1.
I looked at the documentation, and it seems that when classifier-async-mode=1 the classifier:

    Attaches metadata after the inference results are available to next Gst Buffer in its internal queue

However, I am not sure what this internal queue would be. I couldn’t find any other mention of it so I am not sure how and where this metadata is attached.

Furthermore, I don’t understand how I could possibly get metadata from past objects. I looked at the documentation and couldn’t find any attribute of NvDsClassifierMeta or NvDsLabelInfo suggesting that some metadata might refer to another object. Given an obj_meta of type NvDsObjectMeta, what I am doing is the following:

        # Walk the list of classifier metadata attached to this object.
        l_class = obj_meta.classifier_meta_list

        class_report_metadata_list = list()

        while l_class is not None:
            class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)

            label_report_metadata_list = parse_class_meta(class_meta)
            class_report_metadata_list.extend(label_report_metadata_list)

            try:
                # pyds raises StopIteration when the list ends.
                l_class = l_class.next
            except StopIteration:
                break

        object_report_metadata.metadata_attributes = class_report_metadata_list

Where parse_class_meta is defined as:

    @classmethod
    def parse_class_meta(cls, class_meta: pyds.NvDsClassifierMeta):

        label_report_metadata_list = list()

        # Walk the list of label info entries for this classifier meta.
        l_label = class_meta.label_info_list
        while l_label is not None:
            label_info = pyds.NvDsLabelInfo.cast(l_label.data)

            label_report_metadata = dict(
                label=label_info.result_label,
                confidence=label_info.result_prob,
            )

            label_report_metadata_list.append(
                label_report_metadata
            )

            try:
                # pyds raises StopIteration when the list ends.
                l_label = l_label.next
            except StopIteration:
                break

        return label_report_metadata_list

My questions are:

  1. Where is the metadata attached?
  2. Should I monitor past tracking metadata (as in deepstream_test_2.py from NVIDIA-AI-IOT/deepstream_python_apps on GitHub), or does this have nothing to do with it?
  3. How can I know that the metadata for a given object has not been produced yet? And how can I wait for it? I have to report the results of my pipeline to the cloud, and I’d like to wait for the metadata before uploading the results.

EDIT: based on this topic, [Secondary GIE] Custom Classifier in sgie outputs only random entry in label.txt - #30 by rohitnairkp, I set secondary-reinfer-interval=0. It helps, but quite a few images are still wrongly classified (this does not happen when classifier-async-mode=0), and many detections have no classifier metadata even when setting input-object-min-width=0 and input-object-min-height=0.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
• The pipeline being used

Hello @yingliu , I am using the official container of Deepstream 6.0.1 on a Tesla T4.

  1. As the documentation says, the sgie’s classifier-async-mode needs to work with a tracker: the old classification meta will be attached to the object meta by the object’s track id.
  2. No, that sample covers tracker meta; it does not include classification meta.
  3. Please refer to Gst-nvmsgbroker (Gst-nvmsgbroker — DeepStream 6.1.1 Release documentation) and to the sample deepstream-test5; you can set the sink’s output type to 6 (MsgConvBroker).

Hello @fanzh , thank you for your reply.

I don’t need to use gst-nvmsgbroker because I am reporting the metadata using a custom probe.
However, I am not sure how to wait for the metadata to be processed. When you say

old classification meta will be attach to object meta by object’s trackid
do you mean that the metadata will be attached once the object is detected again in a subsequent frame, or will the metadata be attached to the first detection of the object?

E.g. let’s consider two subsequent frames, Frame1 (captured at time t1) and Frame2 (captured at time t2). The same car is detected in both frames.
Will the metadata be attached to the detection in Frame2 only, or also in Frame1? If it will be attached to Frame1, how can I know that the secondary model is currently running, so that I know I should wait for the metadata?

The sgie’s classifier-async-mode needs to work with a tracker. Let’s take this pipeline as an example:
pgie (car detector) + tracker + sgie (car color classifier) + osd (drawing).
At time t1, the sgie pushes inference task T1 to a worker thread and pushes the object meta downstream to the osd without waiting for the inference to end. At time t2, if T1 has ended, the sgie will attach the classification meta to the object meta by tracker id (two objects are the same if they have the same tracker id); if T1 has not ended, there is still no classification meta.
You can run the sample deepstream-test2 to verify this.
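To make that timeline concrete, here is a toy Python model of the behavior described above (plain Python for illustration only, not DeepStream code; the function and variable names are made up). Classification results that finish between frames are attached to the *next* detection carrying the same track id, and the frame that triggered the inference gets nothing:

```python
# Toy model of classifier-async-mode=1 (illustration only, not DeepStream API).
# A result submitted at frame t becomes available `latency` frames later and is
# attached to the next detection with the same track id.

def run_frames(frames, latency=1):
    """frames: list of lists of track ids detected per frame.
    latency: number of frames the async classifier needs to finish."""
    pending = {}   # track_id -> frame index at which the result becomes ready
    attached = []  # (frame_idx, track_id) pairs that received classifier meta
    for idx, track_ids in enumerate(frames):
        for tid in track_ids:
            if tid in pending and idx >= pending[tid]:
                attached.append((idx, tid))   # result ready: attach here
                del pending[tid]
            elif tid not in pending:
                pending[tid] = idx + latency  # submit inference, don't wait
    return attached

# Frame 0: car 1 appears (inference submitted, nothing attached yet).
# Frame 1: car 1 reappears -> its result is attached here; car 2 appears once.
# Car 2 never reappears, so it never receives classifier meta.
print(run_frames([[1], [1, 2], [1]]))  # [(1, 1)]
```

This also shows why single-appearance objects end up without attributes in async mode: there is no later detection with the same track id to attach the result to.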

Hey @fanzh, thank you for your explanation. I still have a doubt: when you say “sgie will attach classification meta to object meta by trackerid”, do you mean that it will attach the metadata to the very same NvDsObjectMeta detected at time T1, or will it be another NvDsObjectMeta with the same tracker id as the original?

It will be another NvDsObjectMeta with the same tracker id. After the T1 inference ends, the classification output is saved and will be added to a new object meta with the same tracker id.
The nvinfer plugin is open source; you can add logs in gst_nvinfer_process_objects to verify.

Thank you. So it seems that:

  • At time T1 an object is detected and the detection will be saved in an instance of NvDsObjectMeta that we’ll call nv_ds_object_meta_t1. The secondary model starts.
  • The secondary model keeps computing until the inference is complete.
  • At time T2 the same object is detected again. A new instance of type NvDsObjectMeta is created. We’ll call this instance nv_ds_object_meta_t2. The output from the secondary model is attached to nv_ds_object_meta_t2 because it has the same tracker id that nv_ds_object_meta_t1 had.

Is this correct?

At this point my last question is: will the output of the model ever be attached to nv_ds_object_meta_t1 or will it be attached only to nv_ds_object_meta_t2?

  1. Yes.
  2. With classifier-async-mode=1, the sgie output will not be attached to nv_ds_object_meta_t1; it will be attached only to nv_ds_object_meta_t2.

Thank you @fanzh . This basically means that all the objects detected only once will never have attributes from secondary models. Is this correct?

Yes. An object detected only once may not even get a tracker id.

Sorry, I am not sure I understood correctly. If an object is detected only once, can it have attributes from secondary models?

What does “an object is detected only once” mean? That the object exists in only one frame? In that case, since the object never comes back, it will not have attributes from secondary models.

Yes, I meant when an object appears only in one frame. This is likely to happen when processing many video streams, therefore processing them at a low FPS.
Thank you, this is what I needed to know.

If I may, it would be great if, in future releases, metadata from secondary models could be attached even to the first detection when async mode is enabled for the secondary model. I understand there could be delays, but one could keep a reference to the object metadata and wait for the secondary model’s metadata to be attached. Perhaps a callback could be introduced to signal when the metadata is produced.
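In the meantime, a workaround along those lines can be sketched in plain Python inside the reporting probe: buffer each object’s report keyed by its track id, upload it once classifier metadata arrives, and flush it without attributes after a timeout. This is a minimal sketch under those assumptions (plain Python, no pyds; the class name, `flush_age`, and the `on_ready` callback are made-up names — in a real probe the labels would come from the object’s classifier_meta_list):

```python
# Sketch: buffer per-object reports until classifier metadata arrives, or flush
# them after `flush_age` frames so single-appearance objects are still reported.

class DelayedReportCache:
    def __init__(self, flush_age=30, on_ready=print):
        self.flush_age = flush_age  # frames to wait before giving up
        self.on_ready = on_ready    # callback invoked with the final report
        self.pending = {}           # track_id -> (report dict, first frame idx)

    def observe(self, frame_idx, track_id, labels):
        """Call once per object per frame; labels is [] until meta is attached."""
        if labels:  # classifier meta present on this detection
            report, _ = self.pending.pop(track_id, ({"track_id": track_id}, frame_idx))
            report["labels"] = labels
            self.on_ready(report)
        elif track_id not in self.pending:
            self.pending[track_id] = ({"track_id": track_id}, frame_idx)

    def flush_stale(self, frame_idx):
        """Call at the end of each frame: report objects that never got meta."""
        stale = [tid for tid, (_, first) in self.pending.items()
                 if frame_idx - first >= self.flush_age]
        for tid in stale:
            report, _ = self.pending.pop(tid)
            report["labels"] = []  # no classifier result ever arrived
            self.on_ready(report)
```

For example, an object seen at frame 0 without labels and again at frame 1 with labels is reported once, with its labels; an object seen only once is reported with empty labels after the timeout.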

Thank you!
