Nvmsgconv config.txt file

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1-b56
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): question

Right now I’m getting metadata from the NvOSD element using a probe function. I want to attach the NvMsgConv and NvMsgBroker elements to send that metadata to Kafka, but I don’t quite understand how to set up the msgconv_config.txt file for the NvMsgConv element. I referred to some of the sample apps, but they are not very clear.
I want to get metadata in JSON format similar to this:

name: truck
score: -0.100000
Bounding box :{
x_min: 759.944763
y_min: 379.165680
x_max: 1017.969543
y_max: 583.095764 }
frame: 145

How would I go about creating the msgconv_config.txt file? Thanks.

Please refer to the documentation and the sample app /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test4, which demonstrates sending messages to a broker. In the probe function osd_sink_pad_buffer_metadata_probe, the app uses nvds_add_user_meta_to_frame to attach user meta to frame_meta; this user meta includes information such as width and height. nvmsgconv and nvmsgbroker are open source. In nvmsgconv, the plugin converts that information to a JSON string; please refer to generate_object_object in /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvmsgconv/deepstream_schema/eventmsg_payload.cpp.
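For context, here is a condensed sketch of what that probe does in deepstream-test4: it allocates an NvDsEventMsgMeta, copies the object information into it, and attaches it to the frame as user meta of type NVDS_EVENT_MSG_META so nvmsgconv can serialize it. Variable and callback names (obj_meta, frame_meta, batch_meta, frame_number, meta_copy_func, meta_free_func) follow the sample's conventions; the real sample fills in more fields (sensor, place, analytics) than shown here.

  /* Sketch based on deepstream-test4: attach event message meta per object so
   * nvmsgconv can generate a JSON payload from it. */
  NvDsEventMsgMeta *msg_meta = (NvDsEventMsgMeta *) g_malloc0 (sizeof (NvDsEventMsgMeta));
  msg_meta->bbox.top    = obj_meta->rect_params.top;
  msg_meta->bbox.left   = obj_meta->rect_params.left;
  msg_meta->bbox.width  = obj_meta->rect_params.width;
  msg_meta->bbox.height = obj_meta->rect_params.height;
  msg_meta->frameId     = frame_number;
  msg_meta->confidence  = obj_meta->confidence;
  msg_meta->trackingId  = obj_meta->object_id;

  NvDsUserMeta *user_event_meta = nvds_acquire_user_meta_from_pool (batch_meta);
  if (user_event_meta) {
    user_event_meta->user_meta_data = (void *) msg_meta;
    user_event_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
    user_event_meta->base_meta.copy_func = (NvDsMetaCopyFunc) meta_copy_func;
    user_event_meta->base_meta.release_func = (NvDsMetaReleaseFunc) meta_free_func;
    nvds_add_user_meta_to_frame (frame_meta, user_event_meta);
  }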

Right now I just want to get the default metadata, so I didn’t add any custom metadata to the buffer.
I also kept msgconv_config.txt empty, because I couldn’t understand how to set it up.
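For reference, the sample config shipped with deepstream-test4 (dstest4_msgconv_config.txt) looks roughly like the snippet below. It describes static sensor/place/analytics information that nvmsgconv folds into the payload; the values here are placeholders, not values from my setup, and the real sample has a few more keys.

[sensor0]
enable=1
type=Camera
id=CAMERA_ID
location=45.29;-75.83;48.15
description=Aisle Camera
coordinate=5.2;10.1;11.2

[place0]
enable=1
id=0
type=intersection/road
name=PLACE_NAME
location=45.29;-75.83;4.83
coordinate=1.0;2.0;3.0

[analytics0]
enable=1
id=XYZ_1
description=Vehicle Detection and License Plate Recognition
version=1.0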

Below is how I set the properties of the nvmsgconv and nvmsgbroker elements.
I set the correct proto .so for RabbitMQ, libnvds_amqp_proto.so, as the proto-lib.
Although I gave a location to dump the payload in “debug-payload-dir”, no file was dumped there, and I need to know why that happened.

 /* setting properties of msgconv & msgbroker elements */
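  /* payload-type 0 selects the full DeepStream message schema; 1 would select the minimal schema */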
  g_object_set(nvmsgconv, "config", MSCONV_CONFIG_FILE, "payload-type", 0, "debug-payload-dir", "/opt/nvidia/deepstream/deepstream-6.4/fast-api/test/", NULL);
  g_object_set(nvmsgbroker, "proto-lib", proto_lib, "sync", FALSE, NULL);

  if (cfg_file != NULL) {
        g_object_set(nvmsgbroker, "config", cfg_file, NULL);
  }

I’m currently using RabbitMQ as the message broker, and this is what the cfg_amqp.txt file contains:

[message-broker]
hostname = localhost
username = guest
password = guest
port = 5672
exchange = amq.topic
topic = testTopic
amqp-framesize = 131072

My pipeline works like this (ignoring what comes before the pgie):
pgie → tracker → nvvideoconvert → nvosd
After nvosd I add a tee with two branches: queue1 → nvmsgconv → nvmsgbroker
and queue2 → sink.
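For completeness, this is roughly how I link that part (a minimal sketch; the elements are assumed to be created with gst_element_factory_make() already, and the variable names are mine):

  /* Branch 1: tee -> queue1 -> nvmsgconv -> nvmsgbroker
   * Branch 2: tee -> queue2 -> sink
   * gst_element_link_many() requests the tee src pads automatically. */
  gst_bin_add_many (GST_BIN (pipeline), tee, queue1, queue2,
      nvmsgconv, nvmsgbroker, sink, NULL);

  if (!gst_element_link (nvosd, tee) ||
      !gst_element_link_many (tee, queue1, nvmsgconv, nvmsgbroker, NULL) ||
      !gst_element_link_many (tee, queue2, sink, NULL)) {
    g_printerr ("Failed to link tee branches\n");
    return -1;
  }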

What I have here is a FastAPI server that runs the pipeline and calls the endpoint after inference completes successfully. As shown below, there are no errors on the terminal, but no metadata is passed on to the RabbitMQ server.

(pipeline:371): GStreamer-WARNING **: 12:12:46.684: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.

(pipeline:371): GStreamer-WARNING **: 12:12:47.228: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory

(pipeline:371): GLib-CRITICAL **: 12:12:47.423: g_strrstr: assertion 'haystack != NULL' failed
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:08.919612985   371 0x555ce6aa9060 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/fast-api/inference_base/dino/dino_model_v1.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT inputs          3x544x960       
1   OUTPUT kFLOAT pred_logits     900x91          
2   OUTPUT kFLOAT pred_boxes      900x4           

0:00:09.019918957   371 0x555ce6aa9060 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/fast-api/inference_base/dino/dino_model_v1.onnx_b1_gpu0_fp32.engine
0:00:09.025125431   371 0x555ce6aa9060 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:./configs/model_config.txt sucessfully
Now playing: (null)
Deepstream Pipeline is Running now...
New file created: file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/car.mp4
Calling Start 0 
creating uridecodebin for [file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/car.mp4]

(pipeline:371): GStreamer-CRITICAL **: 12:13:15.721: gst_mini_object_copy: assertion 'mini_object != NULL' failed

(pipeline:371): GStreamer-CRITICAL **: 12:13:15.721: gst_mini_object_unref: assertion 'mini_object != NULL' failed

(pipeline:371): GStreamer-CRITICAL **: 12:13:15.721: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(pipeline:371): GStreamer-CRITICAL **: 12:13:15.721: gst_structure_set_value: assertion 'structure != NULL' failed

(pipeline:371): GStreamer-CRITICAL **: 12:13:15.721: gst_mini_object_unref: assertion 'mini_object != NULL' failed
decodebin child added source
decodebin child added decodebin0
STATE CHANGE ASYNC

decodebin child added qtdemux0
decodebin child added multiqueue0
decodebin child added h264parse0
decodebin child added capsfilter0
decodebin child added nvv4l2decoder0
decodebin new pad video/x-raw
Decodebin linked to pipeline
mimetype is video/x-raw
nvstreammux: Successfully handled EOS for source_id=0
INFO:     127.0.0.1:51064 - "POST /videoDetect HTTP/1.1" 200 OK

I need to know why no metadata is dumped even though I specified the location in debug-payload-dir.

Duplicate of this topic: nvmsgconv-element-doesnt-dump-metadata.