GstNvStreamMux:src_bin_muxer: surface-gpu-id=0,src_bin_muxer-gpu-id=1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5
• NVIDIA GPU Driver Version (valid for GPU only) 550.142
• Issue Type (questions, new requirements, bugs)

When using the nvmultiurisrcbin plugin, I set gpu-id=1 and cudadec-memtype=2, and the inference engine's gpu-id is also 1, but the program still fails with the "not on the same GPU" error shown above.

This problem is version-related and has been fixed in a newer release. If you want the pipeline to run on a specific GPU, you can use the CUDA_VISIBLE_DEVICES environment variable, for example:

export CUDA_VISIBLE_DEVICES=3

I see that it is mentioned in the documentation.

But I found another article: NvMultiurisrcbin doesn't set gpu-id


Is this also a solution? I may need to use different GPUs for different applications inside a Docker container. Can I set export CUDA_VISIBLE_DEVICES=x separately for each application?

You can set CUDA_VISIBLE_DEVICES for each application independently, for example CUDA_VISIBLE_DEVICES=2 ./your_application
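Since environment variables are per-process, each launch gets its own GPU mapping. A minimal shell sketch (using echo as a stand-in for the real applications) showing that every process sees only the value it was launched with:

```shell
# Each child process inherits its own copy of the variable; the echo
# commands are stand-ins for the real DeepStream applications.
CUDA_VISIBLE_DEVICES=2 sh -c 'echo "app A sees GPU(s): $CUDA_VISIBLE_DEVICES"'
CUDA_VISIBLE_DEVICES=3 sh -c 'echo "app B sees GPU(s): $CUDA_VISIBLE_DEVICES"'
# The parent shell's environment is untouched by either launch.
echo "parent shell: ${CUDA_VISIBLE_DEVICES:-unset}"
```

The prefix form (`VAR=value command`) sets the variable only for that one process, so two containers or two applications on the same host can each be pinned to a different GPU without interfering.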

Does the gpu-id in my nvinfer plugin's configuration file also need to match the GPU selected by CUDA_VISIBLE_DEVICES=2?

No, there is no need to configure gpu-id. With CUDA_VISIBLE_DEVICES=2 set, only that GPU is visible to the application, and it is enumerated as device 0, which is the default.
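To make this concrete, here is a hedged sketch of the relevant part of an nvinfer config file when the process is launched with CUDA_VISIBLE_DEVICES=2 (only the gpu-id key is shown; your other [property] keys stay unchanged):

```ini
[property]
# With CUDA_VISIBLE_DEVICES=2, the single visible GPU is enumerated as
# device 0 inside the process, so the default gpu-id=0 is already correct.
# Either omit the key entirely or leave it at 0; do not set it to 2.
gpu-id=0
```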