How to Dynamically Add MP4 Videos to DeepStream Pipeline and Store Processed Videos Separately?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): question

Hi,

How can I dynamically input MP4 videos from a specific directory into a DeepStream pipeline without reinitializing the pipeline? Additionally, I am looking for a solution to store all the processed videos separately with unique names. Any insights or examples on achieving this functionality would be highly valuable.

Thanks

No, there is no way. GStreamer requires you to start a new pipeline when the source changes.

Thank you for replying to my question. I may not have stated my question clearly at the beginning. I modified the RUNTIME SOURCE ADDITION DELETION REFERENCE APP USING DEEPSTREAMSDK 6.4 to dynamically add MP4 videos to the pipeline without restarting it. However, I’ve run into an issue when attempting to create a unique sink for each video so that the processed videos can be stored separately. I would like to know whether this is feasible.

If your video segments are contiguous, you should have a look at the GStreamer “concat” element. This element can concatenate videos smoothly, behaving like a single live source.
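For reference, here is a minimal gst-launch-1.0 sketch of the concat element playing two files back to back as one continuous stream. This is an illustrative pipeline description only; the file names are placeholders and the decoder chain may differ depending on your codecs.

```shell
# Sketch: two MP4 files concatenated into one continuous stream.
# part1.mp4 / part2.mp4 are placeholder file names.
gst-launch-1.0 concat name=c \
    filesrc location=part1.mp4 ! qtdemux ! h264parse ! avdec_h264 ! c. \
    filesrc location=part2.mp4 ! qtdemux ! h264parse ! avdec_h264 ! c. \
    c. ! videoconvert ! autovideosink
```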

Thanks for the reply, but I need to expose this as an API endpoint, so the video segments won’t be contiguous.

Can you explain this requirement? DeepStream works with batched video buffers after the nvstreammux plugin; you can’t get a separate sink for a designated video unless you use nvstreamdemux to split the batch back into individual streams. What do you mean by “create a unique sink for each video”?
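To illustrate the demux path being described, here is a hedged gst-launch-1.0 sketch: sources are batched by nvstreammux, inference runs on the batch, and nvstreamdemux splits the batch so each stream gets its own encoder and filesink. All file names, resolutions, and the config file path are placeholders, and a real DeepStream application would normally build this pipeline in code rather than on the command line.

```shell
# Sketch: batch two sources, infer, then demux so each video
# gets its own sink (out_0.mp4 / out_1.mp4). Paths and the
# pgie_config.txt name are placeholders.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1280 height=720 ! \
    nvinfer config-file-path=pgie_config.txt ! nvstreamdemux name=demux \
  filesrc location=in_0.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  filesrc location=in_1.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_1 \
  demux.src_0 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! \
    filesink location=out_0.mp4 \
  demux.src_1 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! \
    filesink location=out_1.mp4
```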

Hi,

My requirement is to create an API endpoint that takes an MP4 video file path in a POST request. The code will then download the video from a GCP bucket. Next, the video will be processed using the DeepStream pipeline, and each processed video will be stored in a separate directory [these processed videos need to be uploaded to a GCP bucket; that’s why I need a separate sink for each processed video]. The endpoint will also return the prediction results.
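For the “stored in a separate directory with unique names” part, one simple approach is to derive each output path from the input file name plus a UTC timestamp and a short random token, so every processed video lands in its own directory and can be uploaded to GCS independently. This is a sketch using only the Python standard library; the `processed` root directory and naming scheme are assumptions, not part of DeepStream.

```python
import uuid
from datetime import datetime, timezone
from pathlib import Path

def unique_output_path(input_path: str, out_root: str = "processed") -> Path:
    """Build a collision-resistant output path for a processed video.

    The output keeps the original stem but gains a UTC timestamp and a
    short random suffix, and lands in its own subdirectory so it can be
    uploaded to a GCP bucket independently of other results.
    """
    stem = Path(input_path).stem
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    token = uuid.uuid4().hex[:8]
    out_dir = Path(out_root) / f"{stem}_{stamp}_{token}"
    return out_dir / f"{stem}_processed.mp4"

p = unique_output_path("videos/cam1.mp4")
# e.g. processed/cam1_20240101T120000_ab12cd34/cam1_processed.mp4
```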

To avoid extra delays in API requests, I’m considering using the RUNTIME SOURCE ADDITION DELETION REFERENCE APP USING DEEPSTREAMSDK 6.4 code so that the models are not reinitialized for each request.
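The request flow described above can be sketched as a single long-lived worker that owns the already-initialized pipeline and consumes jobs from a queue, so each POST only enqueues work instead of rebuilding the pipeline. This is a minimal stdlib sketch; `process_video` is a hypothetical stub standing in for the actual DeepStream run, and all names here are illustrative.

```python
import queue
import threading

jobs = queue.Queue()          # video paths submitted by the API handler
results = {}                  # input path -> processed output (stub)

def process_video(path):
    # Stub: in the real service this would attach the file as a new
    # source on the long-lived DeepStream pipeline and return the
    # processed output path. Here it just echoes a marker string.
    return f"processed:{path}"

def worker():
    # One worker owns the pipeline for its whole lifetime, so the
    # models are loaded once instead of once per request.
    while True:
        path = jobs.get()
        if path is None:      # sentinel -> shut down
            break
        results[path] = process_video(path)
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# An API handler would just do jobs.put(video_path) and return 202.
jobs.put("videos/a.mp4")
jobs.put("videos/b.mp4")
jobs.join()
jobs.put(None)
t.join()
```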

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

The sample deepstream_reference_apps/runtime_source_add_delete/deepstream_test_rt_src_add_del.c at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com) is based on multiple input streams and requires that at least one video is always active in the pipeline. Can you guarantee that at least one video (stream) is in the PLAYING state in the pipeline at all times? If you only need to process videos one by one, the sample will not help you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.