Deepstream-app with 2 Tesla T4

• Hardware Platform Nvidia Tesla T4
• DeepStream Version 5.1

• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 460.32.03
• Issue Type( questions, new requirements, bugs) question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Use deepstream-app with 15 RTSP streams, of which 12 use gpu-id=0 and 3 use gpu-id=1.

I am using deepstream-app with 15 RTSP streams. Everything works when all streams use the same gpu-id (0 or 1).

But I want to run 13 streams with gpu-id=0 and the remaining streams with gpu-id=1.

I am getting the following error:
src_bin_muxer: Memory Compatibility error: input surface gpu-id does not match with configured gpu-id for element, please allocate input using unified memory or use same gpu-id.
surface-gpu-id=1, source-bin-muxer-gpu-id=0.

I have also checked this thread: “How to utilize multiple GPU in deepstream 5.0”, but there seems to be no solution there.

Please suggest an option I can use; my priority is to avoid multiple config files / multiple deepstream instances.
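
For context, the split described above corresponds to per-source gpu-id settings in the deepstream-app config file. A rough sketch (URIs and source group indices are placeholders, not taken from any real config):

```ini
[source0]
enable=1
# type=4 selects an RTSP source in deepstream-app
type=4
uri=rtsp://<camera-0>
gpu-id=0

[source13]
enable=1
type=4
uri=rtsp://<camera-13>
gpu-id=1

[streammux]
# streammux lives on a single GPU, so sources decoded on the
# other GPU trigger the gpu-id mismatch reported below
gpu-id=0
```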


In the scenario where the input surface memory type is NVBUF_MEM_DEFAULT, NVBUF_MEM_CUDA_DEVICE, or NVBUF_MEM_SYSTEM and the input surface gpu-id does not match the streammux gpu-id, this error is reported. And when the input surface memory type does not match the streammux memory type, you will hit another issue:
ERROR from src_bin_muxer: memory type configured and i/p buffer mismatch ip_surf 3 muxer 2
Make sure you use the same memory type for the input surface and streammux.
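
Per the error message’s own suggestion, one way to keep sources on different GPUs feeding a single streammux is unified memory. A minimal sketch, assuming the documented deepstream-app config keys (cudadec-memtype=2 selects unified memory for the decoder output, nvbuf-memory-type=3 selects unified memory for streammux); this is a sketch, not the attached file:

```ini
[source0]
enable=1
type=4
uri=rtsp://<camera-on-gpu-0>
gpu-id=0
# decoder output memory: 0=device, 1=pinned, 2=unified
cudadec-memtype=2

[source13]
enable=1
type=4
uri=rtsp://<camera-on-gpu-1>
gpu-id=1
cudadec-memtype=2

[streammux]
gpu-id=0
# 0=default, 1=pinned, 2=device, 3=unified;
# must match the memory type of the input surfaces
nvbuf-memory-type=3
```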
Attached is my configuration using two dGPUs (one T4, one P4); it works on my side.
source30_1080p_dec_infer-resnet_tiled_display_int8.txt (4.7 KB)

Thanks for the support