I’m trying to use the two GPUs of my server (2x A30) to balance the workload of my pipeline, which has two primary models (PGIEs) in the same pipeline, supports several sources, and uses the NEW NVSTREAMMUX.
I have come up with two different options:
Select a different gpu-id for each PGIE configuration. However, a configuration problem appears and the model on GPU 1 does not start; this error appears in the log:
0:02:03.874800967 1 0x7f5cc40114c0 WARN nvinfer gstnvinfer.cpp:1480:gst_nvinfer_process_full_frame:<primary-inference-9> error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
Note: I have configured all the pipeline elements to use CUDA unified memory.
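For context, the per-PGIE gpu-id in this first option lives in each nvinfer config file's [property] section (a sketch; the file names are placeholders and the model/network settings are omitted):

```
# pgie_gpu0.txt — first PGIE, runs on GPU 0
[property]
gpu-id=0
# (model-file, network-mode, etc. go here)

# pgie_gpu1.txt — second PGIE, runs on GPU 1
[property]
gpu-id=1
# (model-file, network-mode, etc. go here)
```

The same value can also be set directly as the gpu-id property on each nvinfer element; the element property overrides the config file.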
Divide the pipeline into two pipelines, each with one PGIE model. However, the NEW NVSTREAMMUX does not have a gpu-id parameter to select where to place this component.
I prefer the first of these two options, as it gives more flexibility to my solution as I have it programmed.
We came across a different issue involving the transfer of PGIE detections from an RGB stream to an infrared stream. To address this challenge, we designed a custom element that effectively resolved the problem. While I’m uncertain if there’s an alternative solution to your issue that doesn’t involve crafting a custom element, I am confident that a custom element modeled after our solution could effectively resolve your concern.
The custom element, named metatransfer, is a GStreamer element based on GstAggregator. It operates by extracting all DeepStream metadata from a GStreamer buffer received through its metapad and transferring it to the buffers received through its buffpad. The resultant output buffer comprises the data from buffpad along with the metadata from metapad.
Below is a diagram illustrating how this element would solve your issue:
For the 1st option, could you share your whole pipeline with us, or reproduce the problem with our demo? If we can reproduce it in our environment, we can analyze it quickly.
For the 2nd option, the new nvstreammux does not process the images itself, so there is no need to set a gpu-id on it.
Regarding the second option, how can I modify an example (deepstream_test_3.py) to use the NEW NVSTREAMMUX instead of the old one and run entirely on GPU 1? I understand that the new nvstreammux does not process the image, but the decoder does, and the GPU it runs on must somehow be defined, am I wrong?
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
Could you simplify your demo code, or directly use our demo deepstream-test3, so we can analyze this problem?
1st option: you should set the gpu-id and the unified memory type on every plugin using the relevant parameters.
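As a sketch of what "every plugin" means in practice: the dGPU DeepStream plugins expose gpu-id, nvvideoconvert exposes nvbuf-memory-type (3 = NVBUF_MEM_CUDA_UNIFIED), and nvv4l2decoder exposes cudadec-memtype (2 = unified memory). A single-source gst-launch-1.0 sketch with the new nvstreammux and the second PGIE pinned to GPU 1 (file paths and config names are placeholders; this needs the hardware and configs to actually run):

```
gst-launch-1.0 \
  filesrc location=/path/to/video.h264 ! h264parse ! \
  nvv4l2decoder gpu-id=0 cudadec-memtype=2 ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 ! \
  nvinfer config-file-path=pgie_gpu0.txt gpu-id=0 ! \
  nvvideoconvert gpu-id=1 nvbuf-memory-type=3 ! \
  nvinfer config-file-path=pgie_gpu1.txt gpu-id=1 ! \
  fakesink
```

Note the nvvideoconvert between the two PGIEs: with unified memory it gives the second nvinfer a surface it can access from GPU 1, which is what the "Memory Compatibility Error" above is complaining about.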
2nd option: just set the environment variable: export CUDA_VISIBLE_DEVICES=<gpu-id>
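A minimal sketch of the 2nd option: CUDA_VISIBLE_DEVICES pins the whole process to one GPU (which CUDA then renumbers as device 0 inside the process, so no per-plugin gpu-id is needed), and USE_NEW_NVSTREAMMUX=yes is the documented switch for enabling the new nvstreammux. The demo path is a placeholder:

```shell
# Make only GPU 1 visible to the process; decoder, mux and nvinfer all land on it.
export CUDA_VISIBLE_DEVICES=1
# Enable the new nvstreammux.
export USE_NEW_NVSTREAMMUX=yes
# Then launch the unmodified demo, e.g.:
# python3 deepstream_test_3.py -i file:///path/to/video.mp4
echo "$CUDA_VISIBLE_DEVICES $USE_NEW_NVSTREAMMUX"
```

This avoids touching the application code at all, at the cost of binding the entire process to a single GPU.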