Multiple GPUs are present but one GPU is not used

Please provide complete information as applicable to your setup.

• RTX3090 x2
• DeepStream 6.2
• TensorRT 8.5.2.2
• NVIDIA GPU Driver Version 525.116.04
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue?
We run 40 separate pipelines and use YOLOv7 for inference. We configured the decoding and inference stages of each pipeline to use the same GPU, and assigned GPUs to pipelines in a round-robin fashion to spread the load. However, on the machine with two 3090s, one of the 3090s is never used. Below are our code snippets, the pipeline structure diagram, and the GPU utilization graphs.
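As a minimal sketch of the round-robin ("rotation") assignment described above (the function and property names here are illustrative, not from the original code): each new pipeline takes the next GPU in turn, and every DeepStream element in that pipeline must receive the same `gpu-id`.

```python
# Hypothetical sketch of round-robin GPU assignment across pipelines.
# `build_pipeline_config` is an illustrative helper, not DeepStream API.
from itertools import cycle

NUM_GPUS = 2  # two RTX 3090s
_gpu_cycle = cycle(range(NUM_GPUS))

def build_pipeline_config(stream_uri: str) -> dict:
    """Assign one GPU to the whole pipeline, rotating across streams."""
    gpu_id = next(_gpu_cycle)
    # Every element in this pipeline gets the SAME gpu-id; mixing ids
    # without unified memory triggers the Memory Compatibility Error below.
    return {
        "uri": stream_uri,
        "nvv4l2decoder": {"gpu-id": gpu_id},
        "nvstreammux": {"gpu-id": gpu_id},
        "nvinfer": {"gpu-id": gpu_id},
    }

# 40 pipelines alternate between GPU 0 and GPU 1.
configs = [build_pipeline_config(f"rtsp://camera/{i}") for i in range(40)]
```

With this scheme, pipelines 0, 2, 4, … land on GPU 0 and pipelines 1, 3, 5, … on GPU 1, so both cards should see roughly equal load.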

Here is the error message:

```
0:01:13.194558119 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
0:01:13.194586292 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: surface-gpu-id=0,primary-nvinference-engine-gpu-id=1
Has feed ? YES
0:01:13.274386701 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
0:01:13.274409634 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: surface-gpu-id=0,primary-nvinference-engine-gpu-id=1
```

This does not happen on a machine with four T4 GPUs, so it should be unrelated to our code. Is it related to the graphics card model?

Is there no one who can answer this?

Since there has been no update from you for a while, we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There are two ways:

  1. Assign every DeepStream element in the same pipeline to the same GPU, e.g. nvv4l2decoder, nvstreammux, nvinfer, nvstreamdemux, nvvideoconvert, …
  2. If you want some elements to work on GPU 0 while other elements in the same pipeline work on GPU 1, the elements that allocate buffers (e.g. nvvideoconvert, nvstreammux, …) should use the "nvbuf-mem-cuda-unified" memory type so that the buffer memory can be accessed by multiple GPUs.
    Gst-nvstreammux — DeepStream 6.2 Release documentation
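As a sketch of option 2 (a config fragment, not a tested pipeline): on dGPU setups, setting `nvbuf-memory-type=3` on the buffer-allocating elements selects CUDA unified memory, which both GPUs can access. The stream URI and the nvinfer config file name below are placeholders.

```shell
# Decode on GPU 0, infer on GPU 1: the elements that allocate buffers
# (nvstreammux, nvvideoconvert) use CUDA unified memory (nvbuf-memory-type=3)
# so the surfaces produced on GPU 0 are accessible to nvinfer on GPU 1.
gst-launch-1.0 uridecodebin uri=rtsp://camera/0 ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 \
    gpu-id=0 nvbuf-memory-type=3 ! \
  nvvideoconvert gpu-id=0 nvbuf-memory-type=3 ! \
  nvinfer gpu-id=1 config-file-path=config_infer_yolov7.txt ! \
  fakesink
```

If you keep every element of a pipeline on the same GPU instead (option 1), the default device memory type works and no unified memory is needed.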

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.