Capture frames of RTSP streams with the DeepStream Python apps?

Hi all,
I want to use the RTSP multi-stream sample code in the DeepStream Python apps to feed my own processing unit.
Is it possible to capture the frames of the multiple streams? If so, how?
How do I get the frames of the streams in the code?

• Hardware Platform (Jetson / GPU) : Jetson Nano
• DeepStream Version : 5.0 DP
• JetPack Version (valid for Jetson only) : 4.4 DP
• TensorRT Version : 7.1

Please check the image data access sample app for an example of getting decoded frames as numpy arrays in a probe function.
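For reference, here is a minimal sketch of the kind of pad probe that sample uses. It assumes the pipeline has an nvvideoconvert plus a capsfilter forcing RGBA upstream of the probe point, so that pyds.get_nvds_buf_surface can map the buffer; the function name and my_processing_unit are placeholders, not the sample's exact code.

```python
import numpy as np
import cv2
import pyds
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst


def tiler_sink_pad_buffer_probe(pad, info, u_data):
    """Runs on every batched buffer; pulls each decoded frame out as a numpy array."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Batch metadata attached by nvstreammux; one NvDsFrameMeta per input stream.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        # Map this frame of the batch as a numpy array (RGBA, because of the capsfilter).
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Copy before handing it off; the surface is only valid while the buffer is mapped.
        frame_copy = np.array(n_frame, copy=True, order='C')
        frame_bgr = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGR)

        # frame_meta.pad_index tells you which RTSP source this frame came from.
        my_processing_unit(frame_bgr, source_id=frame_meta.pad_index)  # placeholder for your own code

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```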

To confirm your desired pipeline:
You want multiple streams in, grab the decoded frames for your own processing, and then send the output over RTSP? Does the output need to be individual streams or composited into one tiled frame?


For now I don’t need to send the output over RTSP. I would like to display the streams in multiple windows at the same time, like cv2.imshow, but my first goal is to capture the frames of all streams as numpy arrays at the same time.

Then the image data access app is the right place to start. We are investigating the cv2.imshow behavior, but for now you can at least save the frames to file as demonstrated in the sample app.
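As an illustration, with a probe like the one sketched above, writing each stream's frames to disk is just a cv2.imwrite call inside the per-frame loop, and the probe is attached to the tiler's sink pad. The file naming and attach point below are assumptions for the sketch, not the sample's exact code.

```python
# Inside the per-frame loop of the probe above:
# frame_meta.pad_index identifies the input stream and frame_meta.frame_num the frame,
# so each RTSP source gets its own set of files.
cv2.imwrite("stream_%d_frame_%d.jpg" % (frame_meta.pad_index, frame_meta.frame_num), frame_bgr)

# After building the pipeline, attach the probe, e.g. to the tiler's sink pad:
tiler_sink_pad = tiler.get_static_pad("sink")
if tiler_sink_pad:
    tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)
```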