Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU (Tesla T4)
• DeepStream Version: DeepStream SDK 5.0
I am trying to use the second GPU on a server to run an instance of the deepstream-app pipeline. I am using the attached config files, but I keep getting the following error:
```
ERROR from src_bin_muxer: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
Debug info: gstnvstreammux.c(1224): copy_data_cuda (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer:
surface-gpu-id=0,src_bin_muxer-gpu-id=1
0:00:04.824427829  7547 0x5596b455d2d0 WARN nvinfer gstnvinfer.cpp:1240:convert_batch_and_push_to_input_thread:<primary_gie> error: NvBufSurfTransform failed with error -1 while converting buffer
ERROR from primary_gie: NvBufSurfTransform failed with error -1 while converting buffer
Debug info: gstnvinfer.cpp(1240): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
App run failed
```
I have tried setting cudadec-memtype=2 and nvbuf-memory-type=3 under [streammux], with no luck. I have also tried setting nvbuf-memory-type=3 under all the plugins, but that didn't work either. Based on the error, surface-gpu-id stays at 0 no matter what I set in the configuration file, so how do I change this value?
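For reference, here is a stripped-down sketch of the gpu-id-related settings I have been experimenting with in the deepstream-app config (the file paths and source URI are placeholders, not my actual values):

```
# Hypothetical excerpt -- gpu-id set to 1 (the second GPU) in every
# section that supports it; paths/URIs are placeholders
[source0]
enable=1
type=3
uri=file:///path/to/input.mp4
gpu-id=1
cudadec-memtype=2
nvbuf-memory-type=3

[streammux]
gpu-id=1
batch-size=1
nvbuf-memory-type=3

[primary-gie]
enable=1
gpu-id=1
nvbuf-memory-type=3
config-file=/path/to/pgie_config.txt

[sink0]
enable=1
type=1
gpu-id=1
```

Even with gpu-id=1 everywhere as above, the muxer still reports surface-gpu-id=0, which is what makes me think the decoded surfaces are being allocated on GPU 0 regardless of these settings.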