Getting image from source for RTSP streams in DeepStream

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version 8.4.1
• Issue Type (questions, new requirements, bugs) Question

I’m reposting this question to get some additional clarification.

Currently I’m getting the frames inside the nvinfer src-pad probe, but those frames have already been resized by the nvstreammux plugin.
I want to get the original frame directly after decoding, before it is passed to nvstreammux.

Can you provide me with a code snippet to do this?

So you have already got the frames from the src pad of nvinfer. You can get the frame from the src pad of the decoder or the sink pad of nvstreammux; it’s similar.

@yuweiw It is not similar. In the nvinfer src-pad probe I’m getting the frame from the batch metadata…
The decoder (nvv4l2decoder) output doesn’t have any metadata; it only has the GstBuffer.

OK. You can just use the GstBuffer and GstMapInfo to get the raw frame data, like:

gst_buffer_map (gstBuffer, &gstMapInfo, GST_MAP_READ)

The gstMapInfo.data is the raw frame data.
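To illustrate what can be done with that raw data in Python, here is a minimal sketch of interpreting mapped bytes as a numpy array. It assumes the decoder pad carries NV12 frames in system memory with no row padding (if the buffers are in NVMM memory, `gst_buffer_map` will not give you plain pixel bytes and you would need an `nvvideoconvert` to `video/x-raw` first). `map_info_to_array` and the 4x4 frame are hypothetical names and values for illustration; in a real probe, width and height come from the pad caps.

```python
import numpy as np

def map_info_to_array(data: bytes, width: int, height: int) -> np.ndarray:
    """Interpret raw NV12 bytes as a (height * 3 // 2, width) uint8 array:
    the full-resolution Y plane followed by the interleaved UV plane.
    Assumes no stride padding."""
    return np.frombuffer(data, dtype=np.uint8).reshape(height * 3 // 2, width)

# Example with synthetic data for a hypothetical 4x4 frame:
# 16 bytes of Y plane + 8 bytes of interleaved UV plane = 24 bytes.
raw = bytes(range(24))
frame = map_info_to_array(raw, 4, 4)
y_plane = frame[:4, :]   # luma, shape (4, 4)
uv_plane = frame[4:, :]  # interleaved chroma, shape (2, 4)
```

Note that `np.frombuffer` creates a read-only view over the mapped memory; copy it (`frame.copy()`) if you need the data after the buffer is unmapped.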

@yuweiw Thanks… It would be great if you could provide a sample code snippet to convert the raw frame data into a numpy array.

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

Since you are using nvv4l2decoder, getting the raw data from the NvBufSurface is similar to doing it at nvinfer. You can refer to the source code below and set the batch-id to 0.
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-imagedata-multistream-redaction/deepstream_imagedata-multistream_redaction.py#L137
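A condensed sketch of the pattern from the linked app, adapted to a decoder src-pad probe, might look like the following. `decoder_src_pad_buffer_probe` is a hypothetical name; the imports are deferred into the function so the sketch stands on its own. Since there is no batch metadata at this point, batch-id 0 is passed to `pyds.get_nvds_buf_surface`. Note that on Jetson this call expects RGBA surfaces, so an `nvvideoconvert` plus a capsfilter to `video/x-raw(memory:NVMM), format=RGBA` is assumed to sit before the probed pad.

```python
def decoder_src_pad_buffer_probe(pad, info, u_data):
    """Pad probe that copies the decoded frame before nvstreammux.
    Imports are deferred so this sketch can be defined without a
    DeepStream installation present."""
    import numpy as np
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # No batch metadata exists upstream of nvstreammux, so use
    # batch-id 0: the decoder's NvBufSurface holds a single surface.
    frame = pyds.get_nvds_buf_surface(hash(gst_buffer), 0)

    # Copy, because the surface memory is not valid after the probe returns.
    frame_copy = np.array(frame, copy=True, order="C")
    # ... process frame_copy (e.g. save it, hand it to OpenCV) ...

    return Gst.PadProbeReturn.OK
```

You would attach it with something like `decoder.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, decoder_src_pad_buffer_probe, 0)`, mirroring how the linked app registers its probe.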

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.