gpu-id=1 but memory is still allocated on GPU 0

• Hardware Platform (2x L4)
• DeepStream Version 6.2
• NVIDIA GPU Driver Version (525.125.06)

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA L4           Off  | 00000000:4B:00.0 Off |                    0 |
| N/A   65C    P0    52W /  72W |  22451MiB / 23034MiB |     65%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA L4           Off  | 00000000:98:00.0 Off |                    0 |
| N/A   61C    P0    51W /  72W |  22451MiB / 23034MiB |     72%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

• Issue Type (questions)
• How to reproduce the issue:

docker run --runtime=nvidia --env GST_DEBUG=2 --rm -it nvcr.io/nvidia/deepstream:6.2-samples gst-launch-1.0 videotestsrc is-live=true ! nvvideoconvert gpu-id=1 ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 gpu-id=1 ! nvtracker gpu-id=1 ll-lib-file="/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so" ! nvv4l2h264enc gpu-id=1 ! fakesink
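
For context, gpu-id=1 is only hard-coded here for the repro; in our setup the index comes from our own selection logic and is substituted into the same launch line, roughly like the sketch below (TARGET_GPU is just a placeholder name):

TARGET_GPU=1   # result of our internal GPU-selection logic (placeholder)
docker run --runtime=nvidia --env GST_DEBUG=2 --rm -it nvcr.io/nvidia/deepstream:6.2-samples \
  gst-launch-1.0 videotestsrc is-live=true \
  ! nvvideoconvert gpu-id=$TARGET_GPU \
  ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 gpu-id=$TARGET_GPU \
  ! nvtracker gpu-id=$TARGET_GPU ll-lib-file="/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so" \
  ! nvv4l2h264enc gpu-id=$TARGET_GPU \
  ! fakesink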

Hi everyone! We have a dual-GPU setup (2x L4) and want to choose the GPU that the pipeline runs on based on our own internal logic. In the pipeline above, I set gpu-id=1 on every element, yet some memory is still allocated on GPU 0:

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A   3139056      C   gst-launch-1.0                    194MiB |
|    1   N/A  N/A   3139056      C   gst-launch-1.0                    262MiB |
+-----------------------------------------------------------------------------+

But if I switch the DS image to 6.3 (nvcr.io/nvidia/deepstream:6.3-samples), the issue disappears:

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    1   N/A  N/A   3139263      C   gst-launch-1.0                    280MiB |
+-----------------------------------------------------------------------------+
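
For reference, the per-process numbers above can also be pulled per GPU in machine-readable form; this is the check we run, assuming the query-compute-apps fields below are supported by driver 525.125.06:

# List compute processes together with the bus ID of the GPU they allocated on
# (00000000:4B:00.0 is GPU 0 and 00000000:98:00.0 is GPU 1 on this box)
nvidia-smi --query-compute-apps=gpu_bus_id,pid,process_name,used_gpu_memory --format=csv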

There is no mention of this bug being fixed in the 6.3 release notes, so the question is: was it actually fixed, or is it still there and could it show up again under different conditions? Your confirmation is important because moving from DS 6.2 to DS 6.3 is quite expensive for us (it requires a lot of testing), and we don't want to migrate if the bug wasn't actually fixed.

DS 6.3 fixed this gpu-id issue in the nvtracker plugin.
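
If you stay on DS 6.2 for a while, one way to make sure nothing can land on GPU 0 at all is to expose only the target GPU to the container; inside the container that GPU then enumerates as device 0, so the gpu-id properties can be dropped (they default to 0). A rough sketch of the idea, not verified on this exact setup:

docker run --runtime=nvidia --env NVIDIA_VISIBLE_DEVICES=1 --env GST_DEBUG=2 --rm -it \
  nvcr.io/nvidia/deepstream:6.2-samples \
  gst-launch-1.0 videotestsrc is-live=true \
  ! nvvideoconvert \
  ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 \
  ! nvtracker ll-lib-file="/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so" \
  ! nvv4l2h264enc \
  ! fakesink

The trade-off is that device indices inside the container no longer match the host's nvidia-smi output.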
