Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU, RTX 3090
• DeepStream Version: 6.2 (in the process of switching to 6.3)
• NVIDIA GPU Driver Version (valid for GPU only): 520.61.05
• Issue Type (questions, new requirements, bugs): Question
My DS app is based on testapp3, with inspiration from testapp4 for how to generate message payloads. I am currently using the nvmsgconv and nvmsgbroker libraries to send data from my DeepStream pipeline to Kafka, with the following config:
msgconv:
  payload-type: 0
  msg2p-newapi: 0
  debug-payload-dir: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/debug_dir
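For context, the equivalent element setup in code would look roughly like this. This is only a sketch following the testapp4 pattern; the property names are the nvmsgconv plugin properties, and the values mirror my config above (with payload-type set to 2, which is what the question below is about):

#include <gst/gst.h>

/* Sketch: configuring the nvmsgconv element from C, testapp4-style.
 * Property names are the nvmsgconv plugin properties; payload-type 2
 * corresponds to NVDS_PAYLOAD_DEEPSTREAM_PROTOBUF. */
static GstElement *
create_msgconv (void)
{
  GstElement *msgconv =
      gst_element_factory_make ("nvmsgconv", "nvmsg-converter");

  g_object_set (G_OBJECT (msgconv),
      "payload-type", 2,        /* NVDS_PAYLOAD_DEEPSTREAM_PROTOBUF */
      "msg2p-newapi", 0,        /* same value as in my config */
      "debug-payload-dir",
      "/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/debug_dir",
      NULL);

  return msgconv;
}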
I have adjusted the nvmsgconv library and added custom handling for my message payloads. This works, but as I add more models to my pipeline I would like to explore whether metadata from the DeepStream buffers can be added to the payload more automatically. As far as I understand, this should be possible by using payload-type 2 (NVDS_PAYLOAD_DEEPSTREAM_PROTOBUF).
However, setting payload-type to 2 gives me garbled message payloads (written to the debug directory), e.g.:
4.00�������H" CAMERA_ID
I suspect this is because the message conversion library expects different data than what my application gives it.
So my question is: where and how do I define what to include in the message payloads for the PROTOBUF payload type? Ideally, I would like it to include, for example, all of the data in the classifier meta list for each object. I can see that the message conversion library uses the schema.proto file and fills in the different message objects defined there, but I don't understand how it works or how it can be "mapped" to the metadata I want to include in the messages.
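To make this concrete, below is the kind of metadata I would like the protobuf payload generation to pick up per object. The traversal uses standard DeepStream metadata types (NvDsObjectMeta, NvDsClassifierMeta, NvDsLabelInfo); the protobuf side (the TODO) is exactly the part I don't know how to map onto schema.proto:

#include "nvdsmeta.h"

/* Sketch: the classifier metadata attached to one object that I would
 * like to end up in the protobuf payload. The struct fields below are
 * standard DeepStream metadata; how they should land in the schema.proto
 * messages is my question. */
static void
collect_classifier_meta (NvDsObjectMeta *obj_meta)
{
  NvDsMetaList *l_class, *l_label;

  for (l_class = obj_meta->classifier_meta_list; l_class != NULL;
       l_class = l_class->next) {
    NvDsClassifierMeta *class_meta = (NvDsClassifierMeta *) l_class->data;

    for (l_label = class_meta->label_info_list; l_label != NULL;
         l_label = l_label->next) {
      NvDsLabelInfo *label = (NvDsLabelInfo *) l_label->data;

      /* TODO: copy class_meta->unique_component_id, label->result_label
       * and label->result_prob into the corresponding schema.proto
       * message (which message/field that should be is unclear to me). */
      (void) label;
    }
  }
}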
A follow-up question: is there any reference application that uses the PROTOBUF message payload type which I could look at for inspiration?