Deepstream - synchronize rtsp output and message brokers

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
5.1
• JetPack Version (valid for Jetson only)
4.5.1
• TensorRT Version
7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type( questions, new requirements, bugs)
question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi, I have a Python DeepStream application with an RTSP sink which is pushing out IoT data through Kafka.
The sink is then converted to an HLS stream and displayed in the web browser.
My question is: how could I get the actual frame id or timestamp from the RTSP video stream to synchronize the message broker data with the video output? Can I insert some metadata like a frame id into the stream?

The current msg is just a sample of how to generate a JSON message from NvMeta. If you want different items and a different format in your message, you can modify the code to change it. The gst-nvmsgconv and nvmsgconv components are open source.

The question is more about how to insert metadata into the video stream than into the Kafka messages.

There are already source_id and timestamp fields in NvMeta. NVIDIA DeepStream SDK API Reference: Main Page

I am not sure if I asked correctly. I am familiar with NvMeta; in fact, I am using it to send my custom events to Kafka with the correct frame id and other data. There is no problem on this side.

What I want to achieve is to output the DeepStream pipeline to RTSP and show it on a website. This is also known; I have no problem doing so.

The problem comes when I want to show Kafka messages with extracted metadata together with the RTSP output. The RTSP output has some delay, which could be 1 second or even 10 seconds. Kafka messages do not have any delay, and that is the problem: messages arrive before the video does. So, when I receive a Kafka message, I need to store it in some list. While reading the RTSP output, I need to somehow get the frame id from the video itself (it could be an ID3 tag or something else; I am not familiar with this part) to be able to link the Kafka message to the exact frame.
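The buffering step described here can be sketched independently of DeepStream: keep incoming Kafka messages in a dictionary keyed by frame id, and pop the matching entry once the player side reports that frame id. All names are illustrative, and the eviction limit is an assumption:

```python
from collections import OrderedDict

class MessageBuffer:
    """Holds Kafka messages until the matching video frame is displayed."""

    def __init__(self, max_entries=1000):
        self.pending = OrderedDict()  # frame_id -> message
        self.max_entries = max_entries

    def on_kafka_message(self, frame_id, message):
        # Kafka arrives first, so park the message until the frame shows up.
        self.pending[frame_id] = message
        # Evict the oldest entries so a stalled player cannot grow the buffer
        while len(self.pending) > self.max_entries:
            self.pending.popitem(last=False)

    def on_video_frame(self, frame_id):
        # Called when the player reports the currently displayed frame id;
        # returns the stored message, or None if nothing matched.
        return self.pending.pop(frame_id, None)

buf = MessageBuffer()
buf.on_kafka_message(42, {"event": "person-detected"})
```

The hard part, as noted here, remains recovering the frame id on the playback side; this buffer only covers the storage half.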

Does DeepStream have some mechanism to sync those two streams, video and Kafka messages? Is it possible to somehow insert NvMeta into the video stream in a way that it could be extracted, for example, with another GStreamer pipeline (something like adding subtitles to the stream)?

As you know, the video is transferred over the RTSP protocol while the Kafka messages are transferred over the Kafka protocol; there is no relationship between these two protocols. Once the video is packaged as RTSP payload, it just follows the RTSP protocol: there is no customized information carried with the video, and it has nothing to do with DeepStream any more. That is why we draw the inference results on the video inside the DeepStream pipeline rather than after the sink. It seems you need your own customized protocol to combine extra information with the RTSP stream if you want to connect the Kafka protocol and the RTSP protocol together.
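For completeness, plain GStreamer offers a crude version of "drawing information into the video before the sink": the stock timeoverlay element burns each buffer's running time into the picture ahead of the encoder, so the timing survives into the RTSP/HLS output visually (though not as machine-readable metadata). The pipeline below is only a sketch; timeoverlay is a standard GStreamer element, but the surrounding elements stand in for the real DeepStream sink branch and are untested here:

```
# Burn the buffer running time into each frame before H.264 encoding
gst-launch-1.0 videotestsrc ! timeoverlay halignment=right valignment=top \
    ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000
```

This does not carry NvMeta, but it can help measure the actual end-to-end delay between the Kafka messages and the displayed video.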

ok, thanks