How to use unified memory with the new nvstreammux

• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version: 515.65.01
• Issue Type: Question, possibly Bug
• How to reproduce the issue? See below

When trying to use unified memory to utilize multiple GPUs in a DeepStream pipeline, it seems impossible to get the new nvstreammux to send unified-memory buffers downstream. The same scenario works flawlessly with the “old” nvstreammux. How can we configure the new nvstreammux to use unified memory?

Test case / repro

Change gpu-id to 1 in /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt
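For clarity, this is the only change made to the sample config; the snippet below shows just the affected key in the [property] group (other keys in the shipped file are unchanged and omitted here):

```ini
[property]
# Changed from the sample default gpu-id=0 so that nvinfer runs on the second GPU
gpu-id=1
```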

Using the “old” nvstreammux with unified memory initially allocated for GPU 0 and nvinfer on GPU 1

USE_NEW_NVSTREAMMUX=no gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! \
nvvideoconvert gpu-id=0 nvbuf-memory-type=3 ! mux.sink_0  nvstreammux name=mux width=1920 height=1080 batch-size=1 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! fakesink

This runs without any errors.

The same scenario with the new nvstreammux:

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! \
nvvideoconvert gpu-id=0 nvbuf-memory-type=3 ! mux.sink_0  nvstreammux name=mux ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! fakesink

This results in the warnings below followed by an error:

WARN nvinfer gstnvinfer.cpp:1430:gst_nvinfer_process_full_frame:<nvinfer0> error:
Memory Compatibility Error:
Input surface gpu-id doesnt match with configured gpu-id for element,
please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
WARN nvinfer gstnvinfer.cpp:1430:gst_nvinfer_process_full_frame:<nvinfer0> error: surface-gpu-id=0,nvinfer0-gpu-id=1

We have also tested adding extra debug printouts to gstnvinfer.cpp to show the memory type of the NvBufSurface. The result is NVBUF_MEM_CUDA_UNIFIED with the old nvstreammux, but NVBUF_MEM_DEFAULT with the new nvstreammux.

Is it possible to change the configuration to make this work with the new nvstreammux or is this a bug?

We are checking this issue and will get back to you when there is progress.


I could not reproduce the error with DeepStream 6.1.1.

How many GPUs are there in your device? Can you run “nvidia-smi” to check?

Thanks for testing! There are two GPUs on the device. The output of nvidia-smi is:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-PCI...  Off  | 00000000:37:00.0 Off |                    0 |
| N/A   38C    P0    40W / 250W |      0MiB / 40960MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCI...  Off  | 00000000:86:00.0 Off |                    0 |
| N/A   36C    P0    36W / 250W |      0MiB / 40960MiB |      3%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

I have run the same test again and still get the error described in my initial post. I see a message “Try to open CUVID /dev/nvidia0: get -1” in the output from your test that is not present in ours. Could it be that GPU 0 was not available in your test system, and that this explains the difference? Also, did you change the nvinfer config file to use GPU id 1?

I can reproduce the failure now. We are investigating the problem and will get back to you when there is progress.


@Fiona.Chen do you have any news regarding this issue?

The bug is fixed internally. It will be available with the next release.

That’s very good news! I’m marking this as solved then and will test again when the next version of DeepStream is released.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.