Multiple GPUs, same pipeline

• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3090
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only): 535.113.01

Hello,

I am running a DeepStream pipeline in Python. The pipeline consists of a detector (pgie) → tracker → classifier1 (sgie) → classifier2 (sgie2).

I run the pipeline using nvinferserver for all of the GIEs.

I am trying to run the PGIE on GPU_0 and the SGIEs on GPU_1. But I am getting this error:

 Error: gst-resource-error-quark: Memory Compatibility Error:Input surface gpu-id doesn't match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used (1): gstnvinferserver.cpp(637): gst_nvinfer_server_submit_input_buffer (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:

I took a look at the forums and read in another ticket that:

> If you want some elements to work on GPU 0 while some other elements work on GPU 1 in the same pipeline, the elements that allocate buffers should use “nvbuf-mem-cuda-unified” memory type to make sure the buffer memory can be accessed by multiple GPUs. E.g. nvvideoconvert, nvstreammux, …

I am not really sure which elements allocate buffers. I am using “nvbuf-mem-cuda-unified” for some elements, but not all of them. I have this in my pipeline code:

    if not is_aarch64():
        mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
        self.nvvidconv1.set_property("nvbuf-memory-type", mem_type)
        self.nvvidconv2.set_property("nvbuf-memory-type", mem_type)
        self.tiler.set_property("nvbuf-memory-type", mem_type)
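To reason about which elements matter here: only elements that allocate output buffers (e.g. nvstreammux, nvvideoconvert, the tiler) expose nvbuf-memory-type, and an allocating element needs unified memory whenever some element downstream of it runs on a different GPU. This little helper is purely illustrative (it is not a DeepStream API), but it makes the rule concrete:

```python
# Illustrative sketch only, NOT a DeepStream API. Each entry in the chain is
# (element_name, gpu_id, allocates_buffers). An allocating element needs
# NVBUF_MEM_CUDA_UNIFIED if any later element runs on a different GPU.
def elements_needing_unified_memory(chain):
    needs_unified = []
    for i, (name, gpu_id, allocates) in enumerate(chain):
        if not allocates:
            continue
        downstream_gpus = {g for _, g, _ in chain[i + 1:]}
        if downstream_gpus - {gpu_id}:
            # Buffers allocated here are consumed on another GPU.
            needs_unified.append(name)
    return needs_unified

# Example: pgie on GPU 0, sgies on GPU 1.
chain = [
    ("streammux", 0, True),   # allocates the batched buffers
    ("pgie",      0, False),  # GIEs attach metadata to existing buffers
    ("tracker",   0, False),
    ("sgie",      1, False),
    ("sgie2",     1, False),
]
print(elements_needing_unified_memory(chain))  # ['streammux']
```

In this topology the streammux is the allocator feeding the cross-GPU boundary, which is why setting unified memory only on converters placed after the GIEs is not enough.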

And these are the elements I have in my pipeline:

    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie)
    pipeline.add(nvvidconv1)
    pipeline.add(capsfilter1)
    pipeline.add(sgie2)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv2)
    pipeline.add(fakesink)

Of course, I have edited the config.pbtxt (Triton) and the DeepStream config files so that the models target different GPUs.
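For context, these are roughly the fields involved; the names below are from the nvinferserver protobuf schema and Triton's model config as I understand them, so double-check them against your versions. In the nvinferserver config:

```
infer_config {
  gpu_ids: [1]   # run this GIE on GPU 1
  # ...
}
```

and in the model's config.pbtxt for Triton:

```
instance_group [
  {
    kind: KIND_GPU
    gpus: [ 1 ]
  }
]
```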

What should I do so that I can choose which GPU each model runs on?

Thanks.

It is related to the memory type and the GPU ID. How do you set the GPU ID?
nvinferserver is open source in DS 6.3. Please refer to the code logic in gst_nvinfer_server_submit_input_buffer in /opt/nvidia/deepstream/deepstream-6.3/sources/gst-plugins/gst-nvinferserver/gstnvinferserver.cpp.

I set the GPU ID inside the configs.

But yes, you are right, it is related to the memory type. I wonder which elements I should set the nvbuf-memory-type property on?

The error is from the PGIE. What values did you set for the streammux's and the PGIE's memory type and GPU ID?

I didn’t set nvbuf-memory-type for the PGIE or the streammux. I don’t know whether the PGIE even has a nvbuf-memory-type property.

As for the GPU ID, they are all running on GPU 0, since I have only one GPU.

Thanks for sharing! Could you modify deepstream-test3.py to reproduce this issue, or provide simplified code that reproduces it, including the cfg files? Thanks!

Okay, I think I resolved this issue by setting nvvidconv1's nvbuf-memory-type property to nvbuf-mem-cuda-unified and moving it before the GIEs.
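For anyone landing here later, my reading of that fix (element names are from the snippets above) is that the topology becomes roughly:

```
source(s) → streammux → nvvidconv1 (nvbuf-mem-cuda-unified)
          → pgie (GPU 0) → tracker → sgie (GPU 1) → sgie2 (GPU 1)
          → tiler → nvvidconv → nvosd → nvvidconv2 → fakesink
```

With the converter allocating unified-memory buffers before the first GIE, the surfaces reaching every GIE are accessible from both GPUs, so the gpu-id compatibility check in gst_nvinfer_server_submit_input_buffer no longer fails.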
