Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): H100
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Hi, I would like to know whether there are any samples to reference for running inference on multiple GPUs. For example, I have 24 CCTV cameras and 8 GPUs on my server, and I want to assign 3 cameras to each GPU within one application. I have tried changing the config files; under the property section there is a gpu-id parameter, but the documentation states that only a single integer is accepted. I know I could start 8 separate applications, one per GPU, but is there a way to spread the streams across multiple GPUs in one application?
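For illustration, here is a minimal sketch of how per-source GPU assignment can look in a deepstream-app style config. The section names follow the deepstream-app config format; the URIs are placeholders, and the exact property set depends on your pipeline:

```ini
# Hypothetical deepstream-app config fragment: pin decoding and
# inference to specific GPUs via the gpu-id property.

[source0]
enable=1
type=4                       # type 4 = RTSP source
uri=rtsp://camera-1/stream   # placeholder URI
gpu-id=0                     # decode this stream on GPU 0

[source1]
enable=1
type=4
uri=rtsp://camera-2/stream
gpu-id=1                     # decode this stream on GPU 1

[primary-gie]
enable=1
gpu-id=0                     # run the primary model on GPU 0
config-file=config_infer_primary.txt
```

Note that if all sources are batched into a single nvstreammux/nvinfer instance, the muxer and the model still run on one GPU, so splitting work across GPUs in one process generally means building a separate branch (or a separate process) per GPU.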
Yes, I want each GPU to handle only a portion of the CCTV streams.
They are RTSP cameras.
I have 3 models running in my pipeline now, and I am able to run my application with 30 CCTV streams on a single GPU. If I add another 30 streams, I want them to use the second GPU.
So the GPU can be used for video decoding of the RTSP sources and for model inference (including preprocessing and postprocessing).
Every GPU has video decoding and model inference capabilities. But one video can only be decoded on one GPU, and one model can only run on one GPU within one process.
You need to decide which video should be decoded on which GPU, and which model should run on which GPU. Then you can configure the corresponding elements accordingly.
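The assignment itself can be planned up front before generating the per-GPU configs or pipelines. A simple round-robin split (a hypothetical helper, not part of DeepStream) might look like:

```python
def assign_streams_to_gpus(stream_uris, num_gpus):
    """Round-robin assignment of camera streams to GPUs.

    Returns a dict mapping gpu_id -> list of stream URIs, which can
    then be used to generate one pipeline (or config file) per GPU.
    """
    assignment = {gpu: [] for gpu in range(num_gpus)}
    for i, uri in enumerate(stream_uris):
        assignment[i % num_gpus].append(uri)
    return assignment

# Example: 24 cameras spread over 8 GPUs -> 3 cameras per GPU
cameras = [f"rtsp://camera-{n}/stream" for n in range(24)]
plan = assign_streams_to_gpus(cameras, num_gpus=8)
```

Each entry of `plan` can then drive the `gpu-id` values in the source groups of the corresponding per-GPU config.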
So that means I can either run all 3 models on a single GPU, or choose 3 GPUs and run one model on each, and I can separately choose which GPU decodes each RTSP stream. Please correct me if I am wrong.