How to manage thousands of video streams and feed them to DeepStream?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I want to manage thousands of video streams with DeepStream on Kubernetes. My dGPUs are in the cloud, and there is a Metropolis Helm chart that shows how to feed the streams to the dGPUs.

But the chart hard-codes the matching between dGPUs and streams. That is to say, before I deploy my DeepStream app on Kubernetes, I must decide which streams are fed to which dGPU node and fill that in values.yaml. If I want to change this mapping, I have to edit values.yaml again and run helm upgrade --install on the chart:

example-helm-charts/values.yaml at master · NVIDIA-METROPOLIS/example-helm-charts (github.com)

With the release of DeepStream 6.1, I found there is a new SDK named Video Storage Toolkit (VST), but it is only available as early access.

So, is there a better way to dynamically manage and orchestrate video streams across DeepStream nodes with Kubernetes?

Hello @dailiupup, this forum is more focused on the DeepStream program itself, and we are not familiar with K8s orchestration either. I’m afraid you will have to get help from K8s or other forums.

What is your criterion for dynamically managing and orchestrating video streams? Is it based on GPU load?
If a video stream is already connected to a DeepStream instance, for dynamic management, would you disconnect it and reconnect it to another DeepStream instance in another Docker container?
Are you looking for a complete solution from NVIDIA, or just some suggestions?

Thanks for your reply. I want to get some suggestions here. It would be even better if there were a complete solution from NVIDIA.

I would like the DeepStream instances to act as a pool of computing resources for multi-stream inference, with the system deciding which streams are fed to which DeepStream instance. For dynamic management, if some GPUs run into problems or go offline, I hope there is a toolkit that helps me re-route those video streams to the other online GPUs.
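
To illustrate the kind of behaviour I am after, here is a minimal sketch in plain Python (this is not an existing NVIDIA tool; the pod names and stream URIs are only placeholders):

from itertools import cycle

def assign_streams(stream_uris, healthy_instances):
    # Round-robin every stream onto whichever DeepStream instances are healthy.
    if not healthy_instances:
        raise RuntimeError("no healthy DeepStream instances available")
    assignment = {instance: [] for instance in healthy_instances}
    targets = cycle(healthy_instances)
    for uri in stream_uris:
        assignment[next(targets)].append(uri)
    return assignment

# Placeholder inputs: 12 cameras and 3 DeepStream pods.
streams = ["rtsp://camera-%d/stream" % i for i in range(12)]
instances = ["ds-pod-0", "ds-pod-1", "ds-pod-2"]

plan = assign_streams(streams, instances)

# If ds-pod-1 (or its GPU) goes offline, recompute with the survivors;
# its streams are spread over the remaining instances automatically.
plan_after_failure = assign_streams(streams, ["ds-pod-0", "ds-pod-2"])

In a real deployment the health list would come from Kubernetes (e.g. readiness probes) and the plan would have to be pushed to the running pods; that is exactly the part I hope a toolkit can take over.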

I’m looking for a solution that avoids maintaining a configuration manually like this:

  nodes:
  - name: "customer1" # kubectl get nodes
    localhostpath: <path to my streams source dir>
    width: <width>
    height: <height>
    batch_size: <batch_size>
    no_streams: <batch_size>
    gpus:
    - id: 0
      streams:
      - stream: <images1>
      - stream: <images2>
      - stream: <images1>
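
As a stop-gap, I could generate this nodes section from a flat stream list instead of editing it by hand, and then run helm upgrade --install with the generated file. A rough sketch (assuming PyYAML is installed; the field names simply mirror the placeholders above and are not a verified chart schema):

from itertools import cycle
import yaml  # PyYAML

def build_values(node_name, source_dir, gpu_ids, stream_uris,
                 width=1920, height=1080, batch_size=4):
    # Distribute the stream URIs round-robin over the node's GPUs.
    per_gpu = {gpu_id: [] for gpu_id in gpu_ids}
    targets = cycle(gpu_ids)
    for uri in stream_uris:
        per_gpu[next(targets)].append({"stream": uri})
    return {
        "nodes": [{
            "name": node_name,
            "localhostpath": source_dir,
            "width": width,
            "height": height,
            "batch_size": batch_size,
            "no_streams": len(stream_uris),
            "gpus": [{"id": gpu_id, "streams": streams}
                     for gpu_id, streams in per_gpu.items()],
        }]
    }

values = build_values("customer1", "/path/to/streams", gpu_ids=[0, 1],
                      stream_uris=["images1", "images2", "images3"])
with open("values.generated.yaml", "w") as f:
    yaml.safe_dump(values, f, sort_keys=False)
# Then: helm upgrade --install <release> <chart> -f values.generated.yaml

This still needs a helm upgrade on every change, though, so it only automates the bookkeeping; it does not make the stream-to-GPU assignment dynamic, which is what I am really asking for.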
