How to extract frames from a video using DeepStream?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson Nano / RTX 2060 SUPER
• DeepStream Version : 5.1
• JetPack Version (valid for Jetson only) : 4.5.1 (latest as of this writing)
• TensorRT Version : I guess it’s the one that ships with JetPack - 7.x
• NVIDIA GPU Driver Version (valid for GPU only) : For Jetson Nano - Is there a version? | For dGPU - 460
• Issue Type (questions, new requirements, bugs) : Question. I want to learn how to create a DeepStream application that takes a video source as input and provides the frames of that source at the sink. No inference; just taking a video in and giving out its frames. Later I want to put these frames on a website. I would like to know if there is a guide or material to follow, or if someone can mentor the flow of the project, as I understand the ecosystem (as a beginner) but have never created an application before.

Thanks.

DeepStream is intended for inference. If you don’t need inference, please refer to https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3261/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/multimedia.html# for the Jetson platform.
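For example, here is a rough GStreamer-only sketch (no DeepStream, no inference) that decodes a file with the Jetson hardware decoder and writes every frame out as a JPEG. The input file name and output pattern are placeholders, and it assumes the GStreamer Python bindings (python3-gi) are installed:

```python
# Sketch: decode a video on Jetson with GStreamer and dump every frame
# as a JPEG. No DeepStream and no inference involved.
# "input.mp4" and the output pattern are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvv4l2decoder uses the Jetson hardware decoder; nvvidconv copies the
# decoded frame back to system memory so jpegenc can consume it.
pipeline = Gst.parse_launch(
    "filesrc location=input.mp4 ! qtdemux ! h264parse ! nvv4l2decoder "
    "! nvvidconv ! video/x-raw,format=I420 "
    "! jpegenc ! multifilesink location=frame_%05d.jpg"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error occurs.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The same pipeline string can be tried from a shell with gst-launch-1.0 before wiring it into an application.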

For the RTX 2060, you may consider FFmpeg (FFmpeg | NVIDIA Developer).
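For instance, a minimal sketch for the dGPU path that shells out to FFmpeg with NVDEC hardware decoding. It assumes an FFmpeg build with CUDA support; the file names are placeholders:

```python
# Sketch: call FFmpeg with NVDEC hardware decoding to dump one JPEG
# per decoded frame. "input.mp4" and the output pattern are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "cuda",   # decode on the GPU (NVDEC)
        "-i", "input.mp4",
        "-qscale:v", "2",     # high-quality JPEG output
        "frame_%05d.jpg",     # one image per decoded frame
    ],
    check=True,
)
```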

Hi, thanks for your reply. I would like to know if you can share some code samples as well.
In DeepStream 5.1 I have made some changes to the Python app sample in the deepstream-imagedata-multistream folder and will share the code here: ds-imagedata/deepstream_imagedata-multistream.py at main · rajeshroy402/ds-imagedata · GitHub
But this only extracts the frames in which an object is detected, not the others. Can you share your insights on this code?
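A likely reason (an assumption based on the stock sample): the cv2.imwrite call sits inside the loop over detected objects, so frames without detections are never saved. A rough, untested sketch of a pad probe that saves every frame instead, using the same pyds helpers the sample already imports (the frames/ output directory and counter are placeholders):

```python
# Rough sketch (untested): a pad probe that saves every decoded frame,
# not only frames with detections. Uses the same pyds helpers as the
# deepstream-imagedata-multistream sample; "frames/" is a placeholder.
import numpy as np
import cv2
import pyds
from gi.repository import Gst

frame_count = 0

def save_all_frames_probe(pad, info, u_data):
    global frame_count
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the RGBA surface for this frame; this requires the
        # pipeline to convert to RGBA before this pad, as the sample
        # already does with nvvideoconvert + capsfilter.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                            frame_meta.batch_id)
        frame = np.array(n_frame, copy=True, order="C")
        frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA)
        # Save unconditionally instead of only when an object was found.
        cv2.imwrite("frames/frame_%06d.jpg" % frame_count, frame)
        frame_count += 1
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The key change is that imwrite runs once per frame_meta rather than inside the object-meta loop.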

Thanks.
