Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)
I want to manage thousands of video streams with DeepStream on Kubernetes. My dGPUs are in the cloud, and there is a Metropolis Helm chart that shows how to feed the streams to the dGPUs:
But the chart's config hard-codes the matching of dGPUs and streams; that is to say, before I deploy my DeepStream app on Kubernetes, I must determine which streams feed which dGPU node and fill that into values.yaml. If I want to update the matching, I have to reconfigure values.yaml and run helm upgrade --install on the chart:
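To illustrate the problem, a static stream-to-node mapping in values.yaml might look roughly like the fragment below (the field names here are hypothetical for illustration, not the actual Metropolis chart schema): each DeepStream instance is pinned to a node, and its stream list is fixed at deploy time.

```yaml
# Hypothetical values.yaml fragment: the stream-to-GPU assignment is
# baked into the chart values, so any change requires a helm upgrade.
deepstreamInstances:
  - nodeSelector:
      kubernetes.io/hostname: gpu-node-0   # pinned to one dGPU node
    streams:
      - rtsp://10.0.0.11/cam1
      - rtsp://10.0.0.12/cam2
  - nodeSelector:
      kubernetes.io/hostname: gpu-node-1
    streams:
      - rtsp://10.0.0.13/cam3
```

With this layout, moving cam3 to gpu-node-0 means editing the file and redeploying, which is exactly the manual maintenance described above.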
What is your criterion for dynamically orchestrating video streams? Based on GPU load?
If a video stream is already connected to a DeepStream instance, would dynamic management disconnect it and reconnect it to another DeepStream instance in another container?
Are you seeking a total solution from NV, or just some suggestions?
Thanks for your reply. I want to get some suggestions here. It would be even better if there were a total solution from NV.
I would like the DeepStream instances to act like a pool of computing resources for multi-stream inference, with the system determining which streams feed which DeepStream instance. For dynamic management, if some GPUs hit problems or go offline, I hope there is a toolkit that helps me re-orchestrate those video streams to the other online GPUs.
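The desired behaviour can be sketched as a small scheduler: assign each stream to the least-loaded online DeepStream node, and on node failure re-place its streams on the survivors. This is a minimal illustration of the idea, not an NVIDIA tool; the names (`GpuNode`, `assign_streams`, `failover`) and the capacity numbers are assumptions made up for this sketch.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GpuNode:
    name: str
    capacity: int                 # max streams this DeepStream instance can handle
    online: bool = True
    streams: List[str] = field(default_factory=list)

def assign_streams(streams: List[str], nodes: List[GpuNode]) -> Dict[str, str]:
    """Greedy least-loaded placement; returns a stream -> node mapping."""
    placement: Dict[str, str] = {}
    for stream in streams:
        candidates = [n for n in nodes if n.online and len(n.streams) < n.capacity]
        if not candidates:
            raise RuntimeError(f"no capacity left for {stream}")
        target = min(candidates, key=lambda n: len(n.streams))
        target.streams.append(stream)
        placement[stream] = target.name
    return placement

def failover(nodes: List[GpuNode], failed: str) -> Dict[str, str]:
    """Mark a node offline and re-place its orphaned streams on the survivors."""
    node = next(n for n in nodes if n.name == failed)
    node.online = False
    orphaned, node.streams = node.streams, []
    return assign_streams(orphaned, nodes)

nodes = [GpuNode("gpu-node-0", capacity=4), GpuNode("gpu-node-1", capacity=4)]
placement = assign_streams(["rtsp://cam1", "rtsp://cam2", "rtsp://cam3"], nodes)
print(placement)

# If gpu-node-0 goes offline, its streams move to gpu-node-1:
moved = failover(nodes, "gpu-node-0")
print(moved)
```

A production version of this logic would live in a Kubernetes controller or operator watching node health, rather than in a static values.yaml mapping.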
I’m looking for some solution instead of maintaining a configuration manually like this: