Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 440.33.01
I’m going to migrate our DeepStream 5.0 based application to a server with multiple GPU cards, and I want to make sure all of the cards are utilized. I haven’t found how to set this up yet. Is there an interface to configure it? It would be nice to be able to specify which GPUs to use, or what share of the load each one should take.
Various elements have a “gpu-id” property. You can set it directly in your language of choice, or through the deepstream-app and nvinfer configuration files. It’s probably best to give each pipeline/process its own GPU.
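For illustration, here is where the gpu-id key appears in a deepstream-app style config. The group and key names follow the DeepStream 5.0 sample configs; the URI and values are made up:

```
# hypothetical per-instance config, everything pinned to GPU 0
[source0]
enable=1
type=4              # 4 = RTSP in deepstream-app
uri=rtsp://camera0/stream
gpu-id=0

[streammux]
gpu-id=0

[primary-gie]
gpu-id=0
config-file=config_infer_primary.txt
```

The nvinfer config referenced by config-file likewise takes a gpu-id key under its [property] group.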
Thanks @mdegans. Could you please elaborate a little more?
I used the settings from the dstest_imagedata_config.txt sample, where gpu-id is 0. Let’s say we have 4 GPU cards (0, 1, 2 and 3) and 32 RTSP streams. How do I set it up to use the 4 GPU cards: all together, or one per pipeline?
Respectively, per pipeline. So you’ll have 4 config files, one for each instance of deepstream-app (or whatever your app is), each pinned to its own GPU. With your example you’d have:

config for GPU 0: RTSP sources 0–7
config for GPU 1: RTSP sources 8–15
… and so forth.
This is somewhat awkward. Not only do we need a bunch of config files, we also need to manually specify which streams go to which GPU. Is there a way to let the system balance the streams, i.e. dynamically assign streams to the less active GPUs?
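DeepStream 5.0 has no built-in balancer, but if you launch one pipeline per GPU you can do the assignment yourself before generating the configs. A minimal sketch of the selection logic; the function names and the idea of counting assigned streams are assumptions, and in a real deployment you might weight the choice with a live utilization reading (e.g. from NVML or `nvidia-smi`) instead of a plain count:

```python
def pick_gpu(stream_counts):
    """Return the index of the GPU currently carrying the fewest streams."""
    return min(range(len(stream_counts)), key=lambda g: stream_counts[g])

def balance(num_streams, num_gpus):
    """Greedy assignment: each new stream goes to the least-loaded GPU."""
    counts = [0] * num_gpus
    assignment = []
    for _ in range(num_streams):
        g = pick_gpu(counts)
        assignment.append(g)
        counts[g] += 1
    return assignment

# 32 streams over 4 GPUs ends up as 8 streams per GPU
assignment = balance(32, 4)
```

With equal counts this degenerates to round-robin, which is exactly the static 0–7 / 8–15 split suggested above; the benefit only appears once streams come and go at runtime.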
Hi @bridge, sorry to revive the old topic, but would you mind sharing your final approach to this?
I’m looking to achieve something similar: running multiple RTSP streams on multiple GPUs with multiple DeepStream app instances (to accommodate different versions of the model). Keeping track of all the minor differences between the configs is getting cumbersome, and I’m not sure what the best way is to handle this elegantly.
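One way to tame the config sprawl is plain templating: generate every per-instance file from one base dictionary, with only the per-GPU and per-model deltas spelled out. This is not a DeepStream feature, just a sketch; the group/key names follow the deepstream-app format, while the helper names, file names, and URIs are hypothetical:

```python
# Base settings shared by every instance (deepstream-app group -> keys).
BASE = {
    "streammux": {"gpu-id": 0, "batch-size": 8},
    "primary-gie": {"gpu-id": 0, "config-file": "config_infer_primary.txt"},
}

def render(groups):
    """Serialize {group: {key: value}} into deepstream-app's INI-style text."""
    lines = []
    for group, keys in groups.items():
        lines.append(f"[{group}]")
        lines += [f"{k}={v}" for k, v in keys.items()]
        lines.append("")
    return "\n".join(lines)

def instance_config(gpu, source_uris, overrides=None):
    """Base config + per-GPU source groups + any per-instance deltas."""
    cfg = {group: dict(keys) for group, keys in BASE.items()}
    for keys in cfg.values():
        if "gpu-id" in keys:
            keys["gpu-id"] = gpu
    for i, uri in enumerate(source_uris):
        cfg[f"source{i}"] = {"enable": 1, "type": 4, "uri": uri, "gpu-id": gpu}
    for group, keys in (overrides or {}).items():
        cfg.setdefault(group, {}).update(keys)
    return render(cfg)

# Instance on GPU 1 with sources 8-15, running a different model version.
text = instance_config(
    1,
    [f"rtsp://cam{i}/stream" for i in range(8, 16)],
    overrides={"primary-gie": {"config-file": "config_infer_v2.txt"}},
)
```

Writing each rendered string to its own file and launching one deepstream-app per file keeps the “minor differences” in one small overrides dict instead of scattered across hand-edited copies.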