Use gpu-id=1 in deepstream-app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU Tesla T4
• DeepStream Version: DeepStream SDK 5.0

Hi,

I am trying to use the second GPU on a server to run an instance of the deepstream-app pipeline. I am using the attached config files; however, I keep getting the following error:

ERROR from src_bin_muxer: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
Debug info: gstnvstreammux.c(1224): copy_data_cuda (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer:
surface-gpu-id=0,src_bin_muxer-gpu-id=1
0:00:04.824427829  7547 0x5596b455d2d0 WARN                 nvinfer gstnvinfer.cpp:1240:convert_batch_and_push_to_input_thread:<primary_gie> error: NvBufSurfTransform failed with error -1 while converting buffer
ERROR from primary_gie: NvBufSurfTransform failed with error -1 while converting buffer
Debug info: gstnvinfer.cpp(1240): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
App run failed

I have tried changing cudadec-memtype=2 and nvbuf-memory-type=3 under [streammux] with no luck. I have also tried setting nvbuf-memory-type=3 under all plugins, but that didn't work either. Based on the error, it seems like surface-gpu-id=0 no matter what I set in the configuration file, so how do I change this value?
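For reference, the gpu-id related fields in my config look roughly like this (a simplified sketch with illustrative values and paths; the full file is in the attached d_gpu1.txt below):

[source0]
enable=1
# type=3 is a URI/file source
type=3
uri=file:///path/to/input.mp4
gpu-id=1
# 0=device, 1=pinned, 2=unified
cudadec-memtype=2

[streammux]
gpu-id=1
# 0=default, 1=pinned, 2=device, 3=unified (dGPU)
nvbuf-memory-type=3
batch-size=1
width=1280
height=720

[primary-gie]
enable=1
gpu-id=1
nvbuf-memory-type=3
config-file=config_infer_primary_yoloV4.txt

[sink0]
enable=1
# type=1 is fakesink
type=1
gpu-id=1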

Would you please let me know what the problem is? (I have made sure that the machine indeed has 2 GPUs.)

config_infer_primary_yoloV4.txt (2.9 KB)
d_gpu1.txt (3.8 KB)

I tried the same code on another machine with two GPUs and it works fine. Do you know what could cause this problem? It seems like the problem is specific to the machine, not to the DeepStream configuration.

Quick update: I realized that I get this error only when the source is a small video file (a few seconds long); I don't get it with a larger video file, and this happens on both servers. Any suggestions on how I can fix this?

Hi @MGh

"use the second GPU on a server for running an instance of deepstream-app pipeline"
After your deepstream-app pipeline works on GPU#0, you can use "export CUDA_VISIABLE_DEVICES=1" or "CUDA_VISIABLE_DEVICES=1 deepstream-app …" to run your application on the 2nd GPU. This way, you don't need to modify anything in your app or config.

Hi @mchi

Thanks a lot for your response. I tried what you suggested; however, even though I set CUDA_VISIABLE_DEVICES=1, when I check with nvidia-smi, GPU 0 is still being used to capacity and GPU 1 is idle.

There are many GPU users who use "CUDA_VISIBLE_DEVICES" without any issue, so I believe it works well.
You can search on the Internet about its usage to find some clues about why it failed on your side.

I see, sure, I will do that. Just one question: does it make sense that it doesn't work for short video files but does work for larger input video files?

This is not expected.

Sorry!
Note, it's CUDA_VISIBLE_DEVICES, not CUDA_VISIABLE_DEVICES.
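With the corrected spelling, the commands from my earlier reply would be, for example (assuming d_gpu1.txt is your deepstream-app config file):

# one-shot, only for this process
CUDA_VISIBLE_DEVICES=1 deepstream-app -c d_gpu1.txt

# or export it for the whole shell session
export CUDA_VISIBLE_DEVICES=1
deepstream-app -c d_gpu1.txt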

@mchi Thank you, this works!

