Memory Compatibility Error:Input surface gpu-id doesn't match with configured gpu-id for element

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) dGPU RTX 3070
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) NVIDIA-SMI 525.116.04
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

We’re trying to run multiple pipelines, each using a different GPU… meaning that, within a given pipeline, every NVIDIA plugin that supports the gpu-id property is set to the same ID. When we run the pipeline(s), we get the following error messages.

nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame:<infer-primarygie-1-nvinfer> error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame:<infer-primarygie-1-nvinfer> error: surface-gpu-id=0,infer-primarygie-1-nvinfer-gpu-id=1

We’ve tried setting the buffer memory to CUDA Unified, but it did not help.
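For reference, this is roughly how we requested unified memory (a sketch, assuming the standard nvbuf-memory-type property on nvstreammux/nvvideoconvert, where the value 3 corresponds to NVBUF_MEM_CUDA_UNIFIED on dGPU; the element variable names are from our pipeline code):

/* Request CUDA unified memory for the output buffers of these elements. */
g_object_set (G_OBJECT (streammux), "nvbuf-memory-type", 3, NULL);
g_object_set (G_OBJECT (nvvidconv), "nvbuf-memory-type", 3, NULL);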

Someone reported the same error some time ago here: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, but it was left unresolved.

Any suggestions? I’m actually reporting this on behalf of one of my users. I’m trying to get them to dump the pipeline graph to a .dot file for viewing and to collect a complete log. Anything else that might help?

Thanks,
Robert.

I think this is a problem with the gpu-id configuration item.

All GStreamer elements in a pipeline must run on the same GPU.

This means that the decoder, nvstreammux, nvosd, nvinfer, and so on need to use the same gpu-id configuration,

because memory cannot be shared across different GPUs.

Thanks

Do you have any examples of using gpu-id 1 rather than 0? Many thanks!

In deepstream-test1-app, set the properties as below.

g_object_set(G_OBJECT (streammux), "gpu-id", 1, NULL);
g_object_set(G_OBJECT (decoder), "gpu-id", 1, NULL);
g_object_set(G_OBJECT (pgie), "gpu-id", 1, NULL);
g_object_set(G_OBJECT (nvvidconv), "gpu-id", 1, NULL);
g_object_set(G_OBJECT (nvosd), "gpu-id", 1, NULL);

If you need more help, please open a new topic. Thanks

Thanks. I have another question. When I run deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt, the average frame rate reaches 27 on an RTX 2060, while on an RTX 3070 it is only 25. The results did not meet expectations, because I believe the 3070 is definitely better than the 2060… why?

Please open a new topic for performance differences.

@junshengy thanks for the quick response. We’re setting the gpu-id for each component exactly as you have here. I was hoping that a .dot dump converted to a .png would help, but it appears that the NVIDIA plugins do not report gpu-id… that is unfortunate.
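In case it helps, this is roughly how the dump is triggered (a sketch using the standard GStreamer GST_DEBUG_BIN_TO_DOT_FILE macro; it only writes a file if GST_DEBUG_DUMP_DOT_DIR is exported before the app starts, and "ds-pipeline" is just an illustrative file name):

/* Call once the pipeline reaches PLAYING; writes <GST_DEBUG_DUMP_DOT_DIR>/ds-pipeline.dot */
GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "ds-pipeline");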

From the nvinfer plugin error below, you can see that we are setting gpu-id=1:

error: surface-gpu-id=0,infer-primarygie-1-nvinfer-gpu-id=1

Where is surface-gpu-id defined?

Do you have multiple GPUs? This response is for 446073615.
If you only have one GPU, set the value of gpu-id to 0.

Thanks.

As mentioned, I’m reporting this for one of my users… they came to me asking how to run on multiple GPUs, so I’m assuming they do… I’m waiting on confirmation.

thanks again.

I added this code and ran make -j8, then ran deepstream-test1-app.

But the error still appears:

@junshengy my user’s comments are above. Yes, multiple GPUs… they are able to reproduce the problem using deepstream-test1-app as shown above.

Is the error message below not telling us something?

error: surface-gpu-id=0,infer-primarygie-1-nvinfer-gpu-id=1

What does surface-gpu-id refer to, and where is it set/determined?

Thanks,
Robert.

surface-gpu-id is set by the decoder; it identifies the GPU on which the buffer fed into inference was allocated.
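For debugging, a buffer probe can print the gpuId stored in the NvBufSurface so it can be compared with the gpu-id configured on nvinfer. This is only an illustrative sketch (the probe function name and the pad it is attached to are not from the sample):

#include "nvbufsurface.h"

/* Illustrative probe: print the gpuId carried by the incoming NvBufSurface. */
static GstPadProbeReturn
print_surface_gpu_id (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    NvBufSurface *surface = (NvBufSurface *) map.data;
    g_print ("surface gpuId = %u\n", surface->gpuId);
    gst_buffer_unmap (buf, &map);
  }
  return GST_PAD_PROBE_OK;
}

It can be attached, for example, to the nvstreammux src pad with gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, print_surface_gpu_id, NULL, NULL).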

How do you run deepstream-test1-app? The patch runs successfully on my GPU server.

Can you try adding

g_object_set(G_OBJECT (sink), "gpu-id", 1, NULL);

Since my server is headless, I used fakesink instead of nveglglessink.
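The substitution was roughly as follows (a sketch, assuming the sink variable from deepstream_test1_app.c; the element name string is illustrative):

/* Headless variant: fakesink has no display dependency; sync disabled for throughput. */
sink = gst_element_factory_make ("fakesink", "fake-renderer");
g_object_set (G_OBJECT (sink), "sync", FALSE, NULL);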


It still does not work.

These are the error logs:

I don’t know what happened.

About deepstream-test1-app:

1. If you don’t modify anything, does the app run successfully?

2. If you use export CUDA_VISIBLE_DEVICES=0 or export CUDA_VISIBLE_DEVICES=1 to run on a specific GPU, does it run successfully?

3. What is the output of deepstream-app --version-all?

You can also try reinstalling the driver.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic.
If you need further support, please open a new one. Thanks
