• Hardware Platform Nvidia Tesla T4
• DeepStream Version 5.1
• TensorRT Version 7.2.2 (bundled with DeepStream 5.1)
• NVIDIA GPU Driver Version (valid for GPU only) 460.32.03
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I am using deepstream-app with 15 RTSP streams. Everything works when all streams use the same GPU (gpu-id=0 or gpu-id=1).
But I want to run 13 streams with gpu-id=0 and the remaining streams with gpu-id=1.
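For illustration, the per-source GPU split looks roughly like the config fragment below (only two of the fifteen source groups are shown; the RTSP URIs are placeholders):

```ini
# Sources decoded on GPU 0 (source0 .. source12)
[source0]
enable=1
type=4                 ; 4 = RTSP source in deepstream-app
uri=rtsp://<camera-0>
gpu-id=0

# Sources decoded on GPU 1 (source13, source14)
[source13]
enable=1
type=4
uri=rtsp://<camera-13>
gpu-id=1

[streammux]
gpu-id=0               ; the muxer sits on one GPU, so GPU-1 buffers mismatch
```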
With that split I get the following error:
src_bin_muxer: Memory Compatibility error: input surface gpu-id does not match with configured gpu-id for element, please allocate input using unified memory or use same gpu-id.
I have also checked the thread “How to utilize multiple GPU in deepstream 5.0”, but there seems to be no solution there.
Please suggest an option I can use, given that my priority is to avoid multiple config files / multiple deepstream-app instances.
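Following the error message's own suggestion, one thing I have considered is forcing unified memory for the decoder output via the cudadec-memtype property in the GPU-1 source groups (value 2 = unified, per the deepstream-app source-group reference). I have not confirmed this resolves the cross-GPU muxing, so this is only a sketch:

```ini
[source13]
enable=1
type=4
uri=rtsp://<camera-13>
gpu-id=1
cudadec-memtype=2   ; 0=device, 1=pinned, 2=unified; the error asks for unified memory
```

Is this the intended way to feed sources from two GPUs into a single nvstreammux, or is there another supported approach?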