How to integrate a customized DeepStream app with VST/SDR and Metropolis Analytics Microservices?

We plan to integrate our custom DeepStream app into the AI NVR, both to leverage the video source addition/removal capabilities of VST/SDR and to integrate our own analytics module. However, we have found that neither deepstream-app nor deepstream-test5 in the current DeepStream 6.4 SDK supports gem overlay or SDR.

Regarding the customized AI NVR workflow, we would like to ask you the following questions:
1. Will deepstream-moj-app be open-sourced in the future, or will there be tutorials to help developers integrate a custom DeepStream app with the MMJ components?
2. If there is no such plan, when integrating our custom DeepStream app into the AI NVR workflow, would you recommend extending nvds_rest_server to support gem overlay and SDR, writing our own microservices for gem overlay/SDR, or something else?

I will check internally about open-sourcing deepstream-moj-app.

Can you share the details of your customization? Some customizations can be done just by modifying the config of deepstream-moj-app, e.g., Replacing Deepstream peoplenet model with Yolo model - Intelligent Video Analytics / Metropolis Microservices for Jetson - NVIDIA Developer Forums
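For readers following along: a model swap of that kind is typically a config-only change. A rough, illustrative fragment in the style of the DeepStream app configs (the exact group layout depends on the app; paths here are placeholders):

```ini
# [primary-gie] group of the DeepStream app config (illustrative)
[primary-gie]
enable=1
# Point to an infer config written for the replacement model:
config-file=config_infer_primary_yolo.txt
# Optional; the engine is generated on first run if this is omitted:
# model-engine-file=model_b4_gpu0_fp16.engine
```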

Since we use a customer-provided model in the secondary GIE, which produces custom output tensors, we have to define a custom metadata schema to pack those outputs and then send them to specific microservices through the msgbroker (Kafka) plugin. This also prevents us from reusing deepstream-moj-app by only modifying config files.
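To make "custom metadata schema" concrete, here is a hedged sketch of the packing step: a plain C++ helper that serializes SGIE tensor values into a JSON payload such as a custom msgconv implementation might hand to the Kafka msgbroker. The field names (`sensorId`, `objectId`, `embedding`) and the function itself are invented for illustration, not part of any DeepStream API:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical example: pack SGIE output tensor values into a JSON payload
// for a custom message schema. The schema is invented for illustration; a
// real deployment would match whatever the consuming microservice expects.
std::string PackTensorPayload(const std::string& sensor_id,
                              unsigned long object_id,
                              const std::vector<float>& tensor) {
  std::string json = "{\"sensorId\":\"" + sensor_id + "\",";
  json += "\"objectId\":" + std::to_string(object_id) + ",";
  json += "\"embedding\":[";
  for (std::size_t i = 0; i < tensor.size(); ++i) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%.4f", tensor[i]);
    json += buf;
    if (i + 1 < tensor.size()) json += ",";
  }
  json += "]}";
  return json;
}
```

The returned string would be placed in the `NvDsMsg2pMetaInfo`/payload buffer that the msgconv library returns to the pipeline.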

There is currently no plan to open source it. Is it possible to customize those parts in /opt/nvidia/deepstream/deepstream-6.4/sources/gst-plugins/gst-nvmsgconv/gstnvmsgconv.cpp?

That’s a pity, but thanks for the reminder.
We are trying to parse the NvDsInferTensorMeta and access the tensor output data in nvds_msg2p_generate_new(). Is that the suggested way to parse the specific output tensor meta?

Yes, you are right.
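For reference, a minimal sketch of the buffer-level step once the tensor meta has been located. In nvds_msg2p_generate_new() you would walk `frame_meta->frame_user_meta_list` (or `obj_meta->obj_user_meta_list` for a per-object SGIE), match `base_meta.meta_type == NVDSINFER_TENSOR_OUTPUT_META`, cast `user_meta->user_meta_data` to `NvDsInferTensorMeta*`, and read the raw floats from `out_buf_ptrs_host`. The DeepStream structs are deliberately elided below so the sketch compiles standalone; only the generic parsing of the host buffer is shown, assuming a flat classifier-style float output:

```cpp
#include <cstddef>
#include <utility>

// Given the raw host buffer of one SGIE output layer (as obtained from
// NvDsInferTensorMeta::out_buf_ptrs_host[i]), return the index and value of
// the highest-scoring element -- a typical parse for a classifier head.
std::pair<std::size_t, float> ArgMax(const float* data, std::size_t n) {
  std::size_t best = 0;
  for (std::size_t i = 1; i < n; ++i) {
    if (data[i] > data[best]) best = i;
  }
  return {best, data[best]};
}
```

The layer count, dimensions, and data type are available from `NvDsInferTensorMeta::num_output_layers` and `output_layers_info`, so the parse can be made robust to layout changes rather than hard-coding sizes.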


We set the ‘msg-conv-msg2p-newapi’ field to ‘1’ in the config file, in both the sink group and the message converter group. However, the console output shows that nvds_msg2p_generate() is still being called instead of ‘nvds_msg2p_generate_new()’. Besides modifying the config file and replacing the custom ‘libnvds_msgconv.so’, is there anything else that needs to change?
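For reference, the relevant part of our config looks roughly like this (group and key names follow the deepstream-test5 style config discussed above; paths are abbreviated placeholders):

```ini
[sink1]
enable=1
type=6
msg-conv-msg2p-newapi=1
msg-conv-msg2p-lib=<path to custom libnvds_msgconv.so>
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_kafka_proto.so

[message-converter]
enable=1
msg-conv-msg2p-newapi=1
```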

Another question: if we modify the value of ‘text_params.display_text’ in the object_meta inside ‘nvds_msg2p_generate_new()’, will that change be picked up by the Gst-nvdsosd element? We want to display text converted from the custom tensor meta next to the bounding boxes on the OSD.

Can you add some logs in /opt/nvidia/deepstream/deepstream-6.4/sources/gst-plugins/gst-nvmsgconv/gstnvmsgconv.cpp to debug the issue?
Yes, it will be displayed on the OSD.
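A minimal sketch of the display_text update discussed above; the main pitfall is allocation. In DeepStream the string must be allocated with g_strdup(), because nvdsosd later releases it with g_free(). Plain strdup()/free() are used below only so the sketch compiles without glib, with the real calls noted in comments; the helper name is invented for illustration:

```cpp
#include <cstdlib>
#include <cstring>

// Illustrative pattern for replacing object_meta->text_params.display_text.
// In actual DeepStream code, use g_free()/g_strdup() instead of free()/
// strdup(), since the OSD element frees the string with g_free().
char* ReplaceDisplayText(char* current, const char* new_text) {
  if (current) free(current);   // g_free(current) in DeepStream code
  return strdup(new_text);      // g_strdup(new_text) in DeepStream code
}
```

Usage would be `obj_meta->text_params.display_text = ReplaceDisplayText(obj_meta->text_params.display_text, label);` before the buffer reaches nvdsosd.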