Question regarding parallel processing using DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.4
• TensorRT Version 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only) 535.171.04
• Issue Type (questions, new requirements, bugs) Question

I’m working on optimizing our DeepStream pipeline to process long videos more quickly using multiple GPUs. Our current setup is as follows:

  • The current pipeline is: uridecodebin -> streammuxer -> pgie -> tracker -> osd -> fakesink (a minimal sketch of this pipeline follows the list below)
  • This pipeline currently runs on a single GPU.
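For reference, here is a minimal sketch of that single-GPU pipeline built with `Gst.parse_launch`. The input URI, resolution, nvinfer config file, and tracker library path are placeholders I chose for illustration, not values from our actual setup:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder paths/values; substitute your own source, pgie config and tracker lib.
PIPELINE = (
    "uridecodebin uri=file:///path/to/long_video.mp4 ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=pgie_config.txt ! "
    "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! "
    "nvvideoconvert ! nvdsosd ! fakesink sync=false"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

# Block until the file has been fully processed (EOS) or an error occurs.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```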

Our goal is to reduce the processing time by a factor of 4 by utilizing the 4 GPUs in our multi-GPU machine. We're specifically looking at ways to distribute the processing of a single long video across these 4 GPUs.

Currently I am considering chunking the video and distributing chunks to different GPUs.
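As a rough sketch of that idea, I was thinking of splitting the file into keyframe-aligned clips with ffmpeg (stream copy, no re-encode) before feeding each clip to its own pipeline. The chunk length and output naming pattern below are arbitrary choices, and ffmpeg is of course outside DeepStream itself:

```python
import subprocess

def split_into_chunks(src: str, chunk_seconds: int = 600,
                      out_pattern: str = "chunk_%03d.mp4") -> None:
    """Split `src` into roughly chunk_seconds-long clips without re-encoding.

    With stream copy, ffmpeg's segment muxer can only cut at keyframes,
    so the actual chunk boundaries land on the nearest keyframe.
    """
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c", "copy",
            "-f", "segment",
            "-segment_time", str(chunk_seconds),
            "-reset_timestamps", "1",
            out_pattern,
        ],
        check=True,
    )

if __name__ == "__main__":
    split_into_chunks("/path/to/long_video.mp4")
```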

What are the best practices or recommended approaches for efficiently distributing the processing of a single long video across multiple GPUs in DeepStream? Are there any existing DeepStream plugins that could help with this task?

Any insights, examples, or suggestions would be appreciated!

No. There is no such plugin or component.

If the file is a temporally compressed video (e.g. H.264 or H.265 encoded), it can only be decoded in order from the first frame to the last frame. No decoder can start decoding at an arbitrary frame in the middle of the video unless the video is encoded in some special way; frames can only be decoded one by one in time order, so the decoding itself cannot be parallelized. The only thing you can do is split the video into several clips first and then run inference on the clips; these are two separate steps. I don't know whether splitting into clips will actually save you time.
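As a sketch of those two separate steps, once the clips exist you could launch one DeepStream process per clip and pin each process to a different GPU via CUDA_VISIBLE_DEVICES. The clip names assume the ffmpeg naming pattern from the earlier sketch, and `run_pipeline.py` is a hypothetical script standing in for whatever builds the uridecodebin -> nvstreammux -> nvinfer -> nvtracker -> nvdsosd -> fakesink pipeline; it is not an existing DeepStream tool:

```python
import os
import subprocess

# Clip names assume the earlier ffmpeg segment pattern; adjust as needed.
CLIPS = ["chunk_000.mp4", "chunk_001.mp4", "chunk_002.mp4", "chunk_003.mp4"]
NUM_GPUS = 4

procs = []
for i, clip in enumerate(CLIPS):
    env = os.environ.copy()
    # Pin this process to a single GPU; inside the process it appears as device 0.
    env["CUDA_VISIBLE_DEVICES"] = str(i % NUM_GPUS)
    # run_pipeline.py is a hypothetical per-clip DeepStream launcher script.
    procs.append(subprocess.Popen(["python3", "run_pipeline.py", clip], env=env))

# Wait for all per-clip pipelines to finish. Results still have to be merged
# afterwards (e.g. tracker IDs will not be consistent across clips).
for p in procs:
    p.wait()
```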

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.