Accessing the input frames to DeepStream at a later stage in the pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform: Tesla T4
• DeepStream Version: 5.1
• TensorRT Version: 7.2.2
• NVIDIA GPU Driver Version: 455
• Issue Type: questions

We are using the deepstream-test5 reference app for video analytics, sending the metadata through a Kafka message broker. We are attempting to access the input frame from the frame metadata through the GStreamer buffer.
DeepStream -> Kafka -> Spark is the flow of our architecture. We want to save the frame whenever a message demands drawing a bbox on that frame. We wish to send the bbox information back from Spark to DeepStream using bidirectional messaging and draw the bbox on the frame saved in DeepStream.

Frame from DeepStream -> detection meta -> Kafka -> Spark
Spark -> event meta -> DeepStream -> save the corresponding frame

When we need to save a frame, its timestamp will be needed to access it from the GStreamer buffer.
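
For illustration, a minimal sketch (not from the original post) of how the per-frame timestamp could be read from the batch metadata in a C pad probe; the probe function name and where it is attached are assumptions:

```c
/* Hypothetical sketch: read per-frame timestamps from DeepStream batch
 * metadata in a GStreamer pad probe (dGPU, DeepStream 5.1). */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
timestamp_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    /* buf_pts is the frame's presentation timestamp; it could be sent to
     * Spark with the detection metadata and later used as the key for
     * looking up the saved frame. */
    g_print ("source %u frame %d pts %" G_GUINT64_FORMAT "\n",
             frame_meta->source_id, frame_meta->frame_num,
             frame_meta->buf_pts);
  }
  return GST_PAD_PROBE_OK;
}
```

The probe would be attached during pipeline setup with something like `gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, timestamp_probe, NULL, NULL);`.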


Why do you want to send the bbox information to Spark and send it back from Spark to DeepStream to draw the bbox? Why don't you draw the bbox directly? After the message is sent and sent back, more frames will have been processed, so the current frame may not be the frame you want to draw on. You may need extra implementation to control the synchronization.

Hi Amycao,

We need to send all bbox info as metadata to Spark.
We generate the final bbox at the Spark level, as it is based not just on the detections but on certain analytics done in Spark. We call the output of the Spark analytics an incident, for example a vehicle in a no-parking zone.
This incident bbox needs to be sent back to the DeepStream stage for drawing.

We agree we will need extra handling for synchronization. We expect the timestamp of the required frame to be available, and we should be able to access that frame from the buffer stored in DeepStream.

If we needed the bbox to be drawn based on detections alone, we could have done what you are suggesting.

Regards,
Sankalp

With the pipeline you show, after nvmsgbroker the batches and frames are consumed. There is no mechanism to save the frames inside the pipeline. You may do this outside the pipeline, e.g. save the frames with a pad probe, and draw the bbox on the saved frame data with other interfaces.
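
As an illustration of this pad-probe approach, a minimal sketch assuming a dGPU (Tesla T4) setup; the `frame_cache` hash table and the absence of an eviction policy are simplifications, not part of the DeepStream samples:

```c
/* Hypothetical sketch: copy each frame out of GPU memory in a pad probe
 * and cache it keyed by its timestamp. */
#include <gst/gst.h>
#include <cuda_runtime_api.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"

/* buf_pts (guint64) -> host copy of the raw frame bytes.
 * Create once at setup:
 *   frame_cache = g_hash_table_new_full (g_int64_hash, g_int64_equal,
 *                                        g_free, g_free);
 */
static GHashTable *frame_cache;

static GstPadProbeReturn
save_frame_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  GstMapInfo map;

  if (!batch_meta || !gst_buffer_map (buf, &map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;

  /* For NVMM memory, the mapped data is an NvBufSurface describing the
   * whole batch. */
  NvBufSurface *surf = (NvBufSurface *) map.data;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *fm = (NvDsFrameMeta *) l->data;
    NvBufSurfaceParams *p = &surf->surfaceList[fm->batch_id];

    /* On a dGPU like the T4 the surface lives in CUDA device memory, so
     * copy it to the host. Copying every frame is expensive; a real
     * implementation would downscale or copy selectively. */
    void *host = g_malloc (p->dataSize);
    cudaMemcpy (host, p->dataPtr, p->dataSize, cudaMemcpyDeviceToHost);

    guint64 *key = g_new (guint64, 1);
    *key = fm->buf_pts;
    g_hash_table_insert (frame_cache, key, host);
  }

  gst_buffer_unmap (buf, &map);
  return GST_PAD_PROBE_OK;
}
```

When the incident timestamp comes back from Spark, the saved bytes can be looked up in `frame_cache` and annotated with any image library outside the pipeline.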

Hi Fiona Chen,

We believe there is a facility in DeepStream to buffer the frames. We want to access the specific frames on which an incident occurs (see the earlier post for what an incident means in our context). Once the timestamp of the incident frame is sent back to DeepStream, we will extract that frame and draw the bbox on it.

Regards,
Sankalp

There is frame buffering before the frames reach the sink, but after the sink, buffers are no longer available anywhere in the pipeline.

So as long as the timestamp sent back to DeepStream arrives before the corresponding frames reach the sink, you can add object meta containing the bbox information to the frame with a pad probe (a minimal sketch follows after the path below). There are already many samples of pad probes and of getting object meta in the DeepStream sample apps:
/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/
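
For reference, a minimal sketch of such a probe helper, assuming the probe is attached upstream of nvdsosd so the OSD element actually draws the box; the helper name and its parameters are illustrative, not from the samples:

```c
/* Hypothetical sketch: attach a bbox received back from Spark to a frame
 * as object meta, from a pad probe placed upstream of nvdsosd. */
#include "gstnvdsmeta.h"

static void
add_incident_bbox (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta,
                   float left, float top, float width, float height)
{
  /* Acquire an object meta from the batch's pool and fill in the bbox. */
  NvDsObjectMeta *obj = nvds_acquire_obj_meta_from_pool (batch_meta);

  obj->rect_params.left = left;
  obj->rect_params.top = top;
  obj->rect_params.width = width;
  obj->rect_params.height = height;
  obj->rect_params.border_width = 3;
  /* Red box: red, green, blue, alpha. */
  obj->rect_params.border_color = (NvOSD_ColorParams) {1.0, 0.0, 0.0, 1.0};

  nvds_add_obj_meta_to_frame (frame_meta, obj, NULL);
}
```

The probe itself would iterate `batch_meta->frame_meta_list`, match each frame's `buf_pts` against the timestamps received from Spark, and call a helper like this for the matching frames.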

Please make sure you are familiar with GStreamer (https://gstreamer.freedesktop.org/) before you start with DeepStream.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.