Sending object info over MQTT via the Python bindings

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Jetson AGX Orin 64 GB
DeepStream 6.4
JetPack 6.0 DP

It appears I cannot simply transmit object information such as confidence or class index without cramming it somehow into the premade classes (like people or cars), unless I want to rewrite and rebuild eventmsg_payload.cpp?

Is there some generic object class where I can just attach a custom JSON blob with my object data? All I want is to transmit six numbers per object, i.e. bounding box, confidence, and class ID.

Please refer to osd_sink_pad_buffer_metadata_probe in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/deepstream_test4_app.c. You can add information to NvDsEventMsgMeta, which supports bbox, objectId, confidence and objClassId.
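For illustration, the per-object fields this reply refers to roughly map onto a JSON payload like the following. This is a plain-Python sketch of the shape only; the exact schema emitted by nvmsgconv may differ, and the function name here is made up for the example:

```python
import json

# Stand-in for the fields one would copy from NvDsObjectMeta into
# NvDsEventMsgMeta in the probe. Field names mirror the meta fields;
# the actual payload layout produced by nvmsgconv may differ.
def build_object_payload(obj_id, left, top, width, height, confidence, class_id):
    return {
        "object": {
            "id": str(obj_id),
            "bbox": {"left": left, "top": top, "width": width, "height": height},
            "confidence": confidence,
            "classId": class_id,
        }
    }

payload = build_object_payload(42, 10.0, 20.0, 100.0, 50.0, 0.87, 3)
message = json.dumps(payload)  # this string is what would travel over MQTT
```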

I am looking particularly at the Python example, but it is really the same code, i.e.:

Specifically, lines 202-209 assign some values:

msg_meta = pyds.alloc_nvds_event_msg_meta(user_event_meta)
msg_meta.bbox.top = obj_meta.rect_params.top
msg_meta.bbox.left = obj_meta.rect_params.left
msg_meta.bbox.width = obj_meta.rect_params.width
msg_meta.bbox.height = obj_meta.rect_params.height
msg_meta.frameId = frame_number
msg_meta.trackingId = long_to_uint64(obj_meta.object_id)
msg_meta.confidence = obj_meta.confidence
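(As an aside, the long_to_uint64 helper used above comes from the sample's utility code; a minimal equivalent, assuming its purpose is only to reinterpret the signed object ID as an unsigned 64-bit value, would be:)

```python
# Minimal equivalent of the sample's long_to_uint64 helper:
# reinterpret a possibly negative signed 64-bit integer as unsigned,
# since NvDsEventMsgMeta.trackingId expects a uint64 value.
def long_to_uint64(value: int) -> int:
    return value & 0xFFFFFFFFFFFFFFFF
```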

but unless we then call

msg_meta = generate_event_msg_meta(msg_meta, obj_meta.class_id)

and assign a specific object class, like car or person, these values will not be sent over MQTT at all.

This becomes apparent when looking at eventmsg_payload.cpp in /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvmsgconv/deepstream_schema/, where values such as msg_meta.confidence only get assigned when the overall msg object is assigned one of a very specific set of cases, such as "Face", "Person", etc.

A workaround is to pack all the information into a string and attach it as, e.g., "haircolor" on an object of type "person", but long term this seems like a really hacky and bad solution.
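For what it's worth, that workaround can be sketched in plain Python: serialize the six numbers into a single JSON string, stuff it into whatever existing string field survives into the payload (the carrier field and function names here are purely illustrative), and parse it back on the MQTT consumer side:

```python
import json

def pack_detection(left, top, width, height, confidence, class_id):
    """Serialize the six detection numbers into one string that can be
    smuggled through an existing string field (e.g. "haircolor")."""
    return json.dumps({"bbox": [left, top, width, height],
                       "confidence": confidence,
                       "classId": class_id})

def unpack_detection(blob):
    """Consumer side: recover the six numbers from the carrier field."""
    d = json.loads(blob)
    return (*d["bbox"], d["confidence"], d["classId"])

blob = pack_detection(10.0, 20.0, 100.0, 50.0, 0.87, 3)
fields = unpack_detection(blob)  # (10.0, 20.0, 100.0, 50.0, 0.87, 3)
```

It round-trips losslessly, but as noted above it abuses a semantically unrelated field, so it is fragile against schema changes.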

Can you provide some step-by-step instructions on how to transmit just the generic detection properties, such as tracking ID, box coordinates, confidence, and class index?

From the example given, it is not very clear how to do that.

nvmsgconv is open source. You can modify generate_object_object in nvmsgconv/deepstream_schema/dsmeta_payload.cpp to add classId.
About haircolor: you can use pyds.NvDsObjectType.NVDS_OBJECT_TYPE_PERSON, which carries more details about a person.

Yes, I understand that the library can be rebuilt, but I am curious whether it is possible to send this basic information without having to rebuild the library and reinstall the Python bindings.

classId is not added in the nvmsgconv low-level code, so you need to modify the low-level code to customize it. As for haircolor, you don't need to modify the C code: you can use the pyds.NvDsObjectType.NVDS_OBJECT_TYPE_PERSON type and add the person detail information in the Python application code.