Clarification on Handling Multiple Streams and Pipelines in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson AGX Xavier Series
• DeepStream Version 6.2

• Requirement details (this is for a new requirement; include the module name, i.e. which plugin or sample application it concerns, and a description of the required function)
I am currently working with the NVIDIA DeepStream SDK and have encountered a challenge related to handling multiple video streams and pipelines dynamically.

Problem Description:

I need to process multiple video streams in a way that allows each stream to have its own dedicated pipeline. Specifically, I want to create and link multiple pipelines dynamically based on the number of streams/resources I have. However, I am running into difficulties and am unsure whether DeepStream supports this kind of setup or whether there is an alternative approach I should consider.

Details:

  • Current Setup: I have a single nvdsanalytics element that needs to be shared across multiple pipelines, where each pipeline handles a different stream.
  • Objective: I aim to create a separate pipeline for each stream, allowing each pipeline to operate independently and have its own output sink.
  • Challenge: Creating and linking pipelines dynamically seems complex, and I am unsure whether there is support for looping through multiple pipelines or whether there is a more straightforward way to achieve this. A rough sketch of what I am attempting follows this list.
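
Roughly, this is what I am attempting, sketched with the GStreamer Python bindings (the URIs and the nvinfer/nvdsanalytics config paths below are placeholders, not my actual files):

```python
# Sketch only: one independent pipeline per stream, each with its own
# nvdsanalytics instance and its own sink. URIs and config paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

stream_uris = [
    "rtsp://camera0.example/stream",  # placeholder
    "rtsp://camera1.example/stream",  # placeholder
]

pipelines = []
for i, uri in enumerate(stream_uris):
    # Even a single-stream pipeline needs nvstreammux (batch-size=1) in front
    # of the DeepStream inference/analytics plugins.
    desc = (
        f"uridecodebin uri={uri} ! m.sink_0 "
        f"nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
        f"nvinfer config-file-path=pgie_config.txt ! "
        f"nvdsanalytics config-file=analytics_config.txt ! "
        f"nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
    )
    pipeline = Gst.parse_launch(desc)
    pipeline.set_state(Gst.State.PLAYING)
    pipelines.append(pipeline)

GLib.MainLoop().run()
```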

Questions:

  1. Is there a recommended approach to handle multiple pipelines dynamically in DeepStream? Can you provide guidance on setting up multiple pipelines to operate independently?
  2. Are there any limitations or best practices for managing multiple nvdsanalytics instances and linking them to separate pipelines?
  3. If dynamic pipeline creation is not directly supported, are there alternative methods or patterns that could be used to achieve the desired outcome?

I would appreciate any insights or suggestions you can provide to help resolve this issue.

Hi,

We encountered the same issue when developing complex DeepStream applications, particularly when dealing with multiple inputs and outputs of different types (snapshots, RTSP, WebRTC, recordings, etc.). What worked best for us was to create pipelines with a single function and link them dynamically in the application. We achieved this using an architecture based on GstD and GstInterpipes, both of which are open-source projects that we develop and maintain.
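
To make the idea concrete, here is a rough illustration (not our production code) of two pipelines linked through interpipe: the analytics pipeline publishes its buffers on a named channel via interpipesink, and an independent output pipeline subscribes to it with interpipesrc. It assumes the gst-interpipe plugin is installed; the URIs and config paths are placeholders.

```python
# Rough illustration only: an analytics pipeline publishes on the "cam0"
# interpipe channel; an output pipeline subscribes to it and can be created
# or torn down at runtime without touching the analytics pipeline.
# Requires the gst-interpipe plugin; URIs and config paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

analytics = Gst.parse_launch(
    "uridecodebin uri=rtsp://camera0.example/stream ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=pgie_config.txt ! "
    "nvdsanalytics config-file=analytics_config.txt ! "
    "nvvideoconvert ! nvdsosd ! "
    "interpipesink name=cam0 sync=false"
)

display = Gst.parse_launch(
    "interpipesrc listen-to=cam0 is-live=true format=time ! "
    "nvegltransform ! nveglglessink"
)

analytics.set_state(Gst.State.PLAYING)
display.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

GstD exposes the same create/play/delete operations through a daemon, so the output pipelines can also be managed from a separate controller process.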

We presented a sample application at the 2020 GTC that you can check out: NVIDIA GTC 2020: How to build a multi-camera Media Server for AI processing on Jetson.

If you need help setting up something specific, let me know, and I’d be happy to assist.

Will the multiple pipelines use different models for inferencing?

What do you mean by “pipeline”? A typical DeepStream pipeline is built around a “batch”, not an individual stream.
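
For reference, the usual topology keeps all streams in a single pipeline: nvstreammux batches the sources, nvinfer and nvdsanalytics run once on the batch (so a single nvdsanalytics instance serves every stream), and nvstreamdemux splits the batch again so each stream can still have its own sink. A rough sketch, with placeholder URIs and config paths rather than a tested configuration:

```python
# Rough sketch of the usual batched topology: a single pipeline in which
# nvstreammux batches all sources, nvinfer/nvdsanalytics process the batch,
# and nvstreamdemux gives each stream its own sink branch.
# URIs and config file paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

uris = [
    "rtsp://camera0.example/stream",  # placeholder
    "rtsp://camera1.example/stream",  # placeholder
]

sources = " ".join(
    f"uridecodebin uri={uri} ! m.sink_{i}" for i, uri in enumerate(uris)
)
sink_branches = " ".join(
    f"d.src_{i} ! queue ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
    for i in range(len(uris))
)

pipeline = Gst.parse_launch(
    f"{sources} "
    f"nvstreammux name=m batch-size={len(uris)} width=1920 height=1080 ! "
    f"nvinfer config-file-path=pgie_config.txt batch-size={len(uris)} ! "
    f"nvdsanalytics config-file=analytics_config.txt ! "
    f"nvstreamdemux name=d {sink_branches}"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```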