Add custom metadata (Python dictionary) to NvDsObjectMeta objects

I am running a custom secondary model that needs to attach a Python dictionary to an NvDsObjectMeta object. What is the suggested way to do this? Is there a workaround? I thought about a few approaches, but none of them seems right.

  • I was thinking about using obj_meta.obj_user_meta_list, but this does not seem like a good idea, since secondary models write their output tensors there. Is there any other place? Even if I could use the user_meta_list, how could I allocate an NvDsUserMeta object so that the memory is owned by C++? There is no pyds.alloc_* method for this. (An untested pool-based idea is sketched right after this list.)
  • I also looked for a way to create custom objects to attach to the message meta, but there is no complete documentation on how to do it, and I do not know C++ very well. Related topic: How to use pyds with custom Object classes?
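
One idea I have not been able to verify: the pyds API docs list pool-based acquire functions, so a user-meta approach might look like the sketch below. This is an untested sketch, not a confirmed solution. It assumes a pad-probe context where batch_meta and obj_meta already exist, and it reuses NvDsEventMsgMeta (one of the few structs with a pyds.alloc_* function) as the payload, since a plain Python dictionary cannot be handed to the C side. API details vary across DeepStream versions.

        import pyds

        def attach_dictlike_meta(batch_meta, obj_meta, serialized_fields):
            # Acquire the NvDsUserMeta from the batch pool so that the
            # C side, not Python, owns the memory.
            user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)

            # The attached data must also be C-owned; NvDsEventMsgMeta has a
            # pyds allocator, so it can stand in for a real custom struct.
            msg_meta = pyds.alloc_nvds_event_msg_meta()
            msg_meta.otherAttrs = serialized_fields  # field choice is my assumption

            user_meta.user_meta_data = msg_meta
            user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
            # deepstream_test_4 also registers copy/release callbacks via
            # pyds.user_copyfunc / pyds.user_releasefunc; omitted here for brevity.
            pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)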

Any other idea?

As a workaround I also tried to store some of the data using the classifier metadata, but this crashes with a segmentation fault (core dumped).
For instance:

        # Constructing the structs directly means Python owns the memory;
        # the add_* calls below then operate on memory DeepStream does not own.
        classifier_meta = pyds.NvDsClassifierMeta()
        label_info = pyds.NvDsLabelInfo()
        label_info.result_label = "is_blue"
        label_info.result_prob = 0.8
        pyds.nvds_add_label_info_meta_to_classifier(classifier_meta, label_info)  # <----- Segmentation fault
        pyds.nvds_add_classifier_meta_to_object(obj_meta, classifier_meta)

You can open a Python terminal and run this code; it gives the error right away.
The only example I found is deepstream_python_apps/apps/deepstream-ssd-parser at 20c6b13671e81cf73ca98fa795f84cab7dd6fc67 · NVIDIA-AI-IOT/deepstream_python_apps · GitHub, which is not useful here: it shows a primary model (object detection), while I am trying to run a classifier.

I also understand that allocating objects with pyds.NvDsClassifierMeta() and pyds.NvDsLabelInfo() might be wrong, because C++ needs to control the memory ownership, not Python. However, there does not seem to be a pyds.alloc_* method for these two classes. Even if I wanted to use NvDsUserMeta, I would have the same issue: there is no function to allocate its memory.
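
That said, the pyds API reference does list acquire-from-pool counterparts for these structs, so the snippet above can presumably be rewritten as below. This is an untested sketch based only on the documented names nvds_acquire_classifier_meta_from_pool and nvds_acquire_label_info_meta_from_pool; batch_meta is assumed to come from a pad probe.

        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

        # Acquire both structs from the batch pools so the C side owns the
        # memory, instead of constructing Python-owned objects.
        classifier_meta = pyds.nvds_acquire_classifier_meta_from_pool(batch_meta)
        label_info = pyds.nvds_acquire_label_info_meta_from_pool(batch_meta)
        label_info.result_label = "is_blue"
        label_info.result_prob = 0.8

        pyds.nvds_add_label_info_meta_to_classifier(classifier_meta, label_info)
        pyds.nvds_add_classifier_meta_to_object(obj_meta, classifier_meta)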

I tried the following:

        import ctypes
        import pyds

        metadata_str = "hey"

        # Allocate len + 1 bytes so there is room for the NUL terminator
        # that pyds.get_string() expects when reading the buffer back.
        buffer = pyds.alloc_char_buffer(len(metadata_str) + 1)
        pa = ctypes.cast(buffer, ctypes.POINTER(ctypes.c_char * len(metadata_str)))
        pa.contents.value = metadata_str.encode()
        print(pyds.get_string(buffer))

        payload = pyds.alloc_nvds_payload()
        payload.payload = buffer  # <----- fails, see the traceback below

But I get the error:

  File "<ipython-input-2-07297b367dd4>", line 12, in <module>
    payload.payload = buffer
TypeError: (): incompatible function arguments. The following argument types are supported:
    1. (self: pyds.NvDsPayload, arg0: capsule) -> None

How do I convert a string to a capsule without writing C++ code? And what is this capsule object?
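
Update: from what I can tell, the capsule is a standard CPython PyCapsule, i.e. an opaque Python object wrapping a raw C pointer, which is what pybind11-based bindings such as pyds expect for untyped pointer fields. One can build a capsule from a raw address without writing C++ via ctypes.pythonapi, as sketched below; whether NvDsPayload then interprets the wrapped pointer correctly is an assumption I have not verified.

        import ctypes

        # PyCapsule_New(void *pointer, const char *name, destructor) is part
        # of the CPython C API and is reachable through ctypes.pythonapi.
        ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
        ctypes.pythonapi.PyCapsule_New.argtypes = [
            ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p]

        def address_to_capsule(address):
            # Wrap a raw address (e.g. the one returned by
            # pyds.alloc_char_buffer) in an unnamed capsule.
            return ctypes.pythonapi.PyCapsule_New(address, None, None)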

Can you refer to the example below?

deepstream_python_apps/deepstream_test_4.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

The example does not help. It does not show custom objects.

Please refer to gst_element_send_nvevent_new_stream_reset — DeepStream Version: 6.1.1 documentation

Isn’t it just for segmentation data? I’d like to store a custom dictionary object.
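
For now, the one workaround I see that needs no C++ at all is to keep the dictionaries on the Python side and never push them into NvDs metadata: a plain dict keyed by the tracking ID. A minimal sketch (it assumes a tracker element is in the pipeline, so obj_meta.object_id is stable across frames):

        # Side table living in the Python process, keyed by tracking ID.
        custom_object_meta = {}

        def store_custom_meta(obj_meta, data):
            # obj_meta.object_id is only stable when a tracker is running.
            custom_object_meta[obj_meta.object_id] = data

        def load_custom_meta(obj_meta):
            return custom_object_meta.get(obj_meta.object_id)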

Please open the link: nvds_acquire_obj_meta_from_pool — DeepStream Version: 6.0 GA documentation (nvidia.com)
