Please provide complete information as applicable to your setup.
• RTX 3090 ×2
• DeepStream 6.2
• TensorRT 8.5.2.2
• NVIDIA GPU Driver Version 525.116.04
• Issue Type: bug
• How to reproduce the issue?
We run 40 separate pipelines, each using YOLOv7 for inference. Within each pipeline, the decoding and inference stages are configured to use the same GPU, and we assign pipelines to GPUs in a round-robin (rotation) fashion to make reasonable use of both GPUs. However, one of the two RTX 3090s is never utilized. Below are our code snippet, pipeline structure diagram, and GPU utilization graph.
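The rotation allocation described above can be sketched as follows (a minimal illustration; `gpu_for_pipeline` and `NUM_GPUS` are hypothetical names, not from our actual code). The intent is that every element of one pipeline, including the decoder, nvstreammux, and nvinfer, receives the same `gpu-id`:

```python
NUM_GPUS = 2  # two RTX 3090s

def gpu_for_pipeline(index, num_gpus=NUM_GPUS):
    """Round-robin assignment: pipelines 0, 2, 4, ... -> GPU 0; 1, 3, 5, ... -> GPU 1."""
    return index % num_gpus

# 40 pipelines, alternating between the two GPUs.
assignments = [gpu_for_pipeline(i) for i in range(40)]

# In the real pipeline, this gpu-id would be set on every element, e.g.
# (DeepStream property names, shown here only as a reminder):
#   decoder.set_property("gpu-id", gpu)
#   streammux.set_property("gpu-id", gpu)
#   pgie.set_property("gpu-id", gpu)
```

With this scheme each GPU should receive 20 pipelines, yet in practice only GPU 0 shows load.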
The error messages are as follows:

```
0:01:13.194558119 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
0:01:13.194586292 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: surface-gpu-id=0,primary-nvinference-engine-gpu-id=1
Has feed ? YES
0:01:13.274386701 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
0:01:13.274409634 82178 0x7f9850007aa0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame: error: surface-gpu-id=0,primary-nvinference-engine-gpu-id=1
```