How to extract frames from a video using DeepStream?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson Nano/ RTX 2060 SUPER
• DeepStream Version : 5.1
• JetPack Version (valid for Jetson only) : 4.5.1 (latest to this date)
• TensorRT Version : I assume it’s the one bundled with JetPack - 7.x
• NVIDIA GPU Driver Version (valid for GPU only) : For Jetson Nano - Is there a version? | For dGPU - 460
• Issue Type (questions, new requirements, bugs) : I want to learn to create a DeepStream application that takes a video source as input and provides the frames of that source at the sink. No inference - just taking in video and outputting its frames. Later I want to serve these frames on a website. Is there a guide or material to follow, or could someone mentor me through the flow of the project? I understand the ecosystem (as a beginner) but have never built an application before.

Thanks.

DeepStream is intended for inference. If you don’t need inference, please refer to the L4T Multimedia Developer Guide for the Jetson platform: https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3261/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/multimedia.html#

For the RTX 2060, you may consider FFmpeg (see the FFmpeg page on NVIDIA Developer).
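As a concrete illustration of the FFmpeg route, here is a minimal sketch that extracts one JPEG per second of video. The input file name `input.mp4`, the output pattern, and the helper function are hypothetical placeholders; the `ffmpeg` flags (`-i`, `-vf fps=...`) are standard FFmpeg options. The command only actually runs when `ffmpeg` and the input file are present:

```python
import os
import shutil
import subprocess

def frame_extract_cmd(src, out_pattern="frame_%04d.jpg", fps=1):
    """Build an ffmpeg command that writes `fps` JPEG frames per
    second of video to numbered files (frame_0001.jpg, ...)."""
    return ["ffmpeg", "-i", src, "-vf", f"fps={fps}", out_pattern]

cmd = frame_extract_cmd("input.mp4")  # hypothetical input file
print(" ".join(cmd))

# Run only when ffmpeg and the source file actually exist
if shutil.which("ffmpeg") and os.path.exists("input.mp4"):
    subprocess.run(cmd, check=True)
```

Raising `fps` extracts more frames per second (e.g. `fps=25` for roughly every frame of a 25 fps source); the resulting JPEGs can then be served by any ordinary web server.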