Missing NVDEC on multiple GPU system

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): NVIDIA A16
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: N/A
• NVIDIA GPU Driver Version (valid for GPU only): 515.65.01
• Issue Type (questions, new requirements, bugs): bug?
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

We have a machine with 8x NVIDIA A16 GPUs. Each NVIDIA A16 presents itself to the operating system as 4 GPUs, so the output of the 'nvidia-smi' command shows 32 GPUs on the machine.

The test command below works if we select a GPU id between 0 and 15:

gst-launch-1.0 -m rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 ! rtph264depay ! nvv4l2decoder ! debugspy ! fakesink

If we select a GPU id greater than 15, we get the error below:

/opt/nvidia/deepstream/deepstream-6.1# gst-launch-1.0 -m rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 ! rtph264depay ! nvv4l2decoder ! debugspy ! fakesink

(gst-plugin-scanner:27): GStreamer-WARNING **: 19:48:32.636: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:27): GStreamer-WARNING **: 19:48:32.649: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
Got message #10 from element "fakesink0" (state-changed): GstMessageStateChanged, old-state=(GstState)GST_STATE_NULL, new-state=(GstState)GST_STATE_READY, pending-state=(GstState)GST_STATE_VOID_PENDING;
Got message #11 from element "debugspy0" (state-changed): GstMessageStateChanged, old-state=(GstState)GST_STATE_NULL, new-state=(GstState)GST_STATE_READY, pending-state=(GstState)GST_STATE_VOID_PENDING;
Got message #12 from element "nvv4l2decoder0" (error): GstMessageError, gerror=(GError)NULL, debug=(string)"v4l2_calls.c\(651\):\ gst_v4l2_open\ \(\):\ /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:\012system\ error:\ No\ such\ file\ or\ directory";
ERROR: from element /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0: Could not open device '/dev/nvidia0' for reading and writing.
Additional debug info:
v4l2_calls.c(651): gst_v4l2_open (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:
system error: No such file or directory
Setting pipeline to NULL ...
Freeing pipeline ...

• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

So we have this unusual behavior: only GPU ids 0 to 15 have a working NVDEC; ids 16 to 31 produce the error above.

Regards,

Additional Info:

nvidia-container-cli -V
cli-version: 1.10.0
lib-version: 1.10.0
build date: 2022-06-13T10:40+00:00
build revision: 395fd41701117121f1fd04ada01e1d7e006a37ae
build compiler: gcc 8.5.0 20210514 (Red Hat 8.5.0-13)
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fplan9-extensions -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -I/usr/include/tirpc -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections

nvidia-smi
Mon Dec 12 14:56:08 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A16          Off  | 00000000:05:00.0 Off |                    0 |
|  0%   37C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A16          Off  | 00000000:06:00.0 Off |                    0 |
|  0%   38C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA A16          Off  | 00000000:07:00.0 Off |                    0 |
|  0%   31C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA A16          Off  | 00000000:08:00.0 Off |                    0 |
|  0%   29C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   4  NVIDIA A16          Off  | 00000000:29:00.0 Off |                    0 |
|  0%   38C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   5  NVIDIA A16          Off  | 00000000:2A:00.0 Off |                    0 |
|  0%   39C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   6  NVIDIA A16          Off  | 00000000:2B:00.0 Off |                    0 |
|  0%   32C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   7  NVIDIA A16          Off  | 00000000:2C:00.0 Off |                    0 |
|  0%   31C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   8  NVIDIA A16          Off  | 00000000:45:00.0 Off |                    0 |
|  0%   34C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   9  NVIDIA A16          Off  | 00000000:46:00.0 Off |                    0 |
|  0%   34C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  10  NVIDIA A16          Off  | 00000000:47:00.0 Off |                    0 |
|  0%   29C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  11  NVIDIA A16          Off  | 00000000:48:00.0 Off |                    0 |
|  0%   27C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  12  NVIDIA A16          Off  | 00000000:65:00.0 Off |                    0 |
|  0%   33C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  13  NVIDIA A16          Off  | 00000000:66:00.0 Off |                    0 |
|  0%   34C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  14  NVIDIA A16          Off  | 00000000:67:00.0 Off |                    0 |
|  0%   27C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  15  NVIDIA A16          Off  | 00000000:68:00.0 Off |                    0 |
|  0%   25C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  16  NVIDIA A16          Off  | 00000000:85:00.0 Off |                    0 |
|  0%   32C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  17  NVIDIA A16          Off  | 00000000:86:00.0 Off |                    0 |
|  0%   33C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  18  NVIDIA A16          Off  | 00000000:87:00.0 Off |                    0 |
|  0%   26C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  19  NVIDIA A16          Off  | 00000000:88:00.0 Off |                    0 |
|  0%   24C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  20  NVIDIA A16          Off  | 00000000:A5:00.0 Off |                    0 |
|  0%   31C    P8    15W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  21  NVIDIA A16          Off  | 00000000:A6:00.0 Off |                    0 |
|  0%   33C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  22  NVIDIA A16          Off  | 00000000:A7:00.0 Off |                    0 |
|  0%   26C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  23  NVIDIA A16          Off  | 00000000:A8:00.0 Off |                    0 |
|  0%   23C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  24  NVIDIA A16          Off  | 00000000:C5:00.0 Off |                    0 |
|  0%   31C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  25  NVIDIA A16          Off  | 00000000:C6:00.0 Off |                    0 |
|  0%   33C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  26  NVIDIA A16          Off  | 00000000:C7:00.0 Off |                    0 |
|  0%   26C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  27  NVIDIA A16          Off  | 00000000:C8:00.0 Off |                    0 |
|  0%   23C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  28  NVIDIA A16          Off  | 00000000:E5:00.0 Off |                    0 |
|  0%   30C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  29  NVIDIA A16          Off  | 00000000:E6:00.0 Off |                    0 |
|  0%   31C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  30  NVIDIA A16          Off  | 00000000:E7:00.0 Off |                    0 |
|  0%   24C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|  31  NVIDIA A16          Off  | 00000000:E8:00.0 Off |                    0 |
|  0%   21C    P8    14W /  62W |      0MiB / 15356MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Hi @genaro.costa
Sorry for the late response! How did you select the GPU id? Is it by a command like: "export CUDA_VISIBLE_DEVICES=1" // select GPU id 1

Hi mchi,
When we run under nvidia-docker and select a single NVIDIA A16 GPU, the id is set inside the container. Below are the variables in the scenario that works:

[root@gpu01 ~]# docker run --rm --cap-add=ALL --gpus device=15 -it nvcr.io/nvidia/deepstream:6.1.1-base bash -c "export | grep NV"
declare -x NVARCH="x86_64"
declare -x NVIDIA_DRIVER_CAPABILITIES="compute,utility,video,compute,graphics,utility"
declare -x NVIDIA_REQUIRE_CUDA="cuda>=11.7 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511"
declare -x NVIDIA_VISIBLE_DEVICES="15"
declare -x NV_CUDA_COMPAT_PACKAGE="cuda-compat-11-7"
declare -x NV_CUDA_CUDART_VERSION="11.7.99-1"
declare -x NV_CUDA_LIB_VERSION="11.7.1-1"
declare -x NV_LIBCUBLAS_PACKAGE="libcublas-11-7=11.10.3.66-1"
declare -x NV_LIBCUBLAS_PACKAGE_NAME="libcublas-11-7"
declare -x NV_LIBCUBLAS_VERSION="11.10.3.66-1"
declare -x NV_LIBCUSPARSE_VERSION="11.7.4.91-1"
declare -x NV_LIBNCCL_PACKAGE="libnccl2=2.13.4-1+cuda11.7"
declare -x NV_LIBNCCL_PACKAGE_NAME="libnccl2"
declare -x NV_LIBNCCL_PACKAGE_VERSION="2.13.4-1"
declare -x NV_LIBNPP_PACKAGE="libnpp-11-7=11.7.4.75-1"
declare -x NV_LIBNPP_VERSION="11.7.4.75-1"
declare -x NV_NVTX_VERSION="11.7.91-1"

And below are the variables in the scenario that does not work:

[root@gpu01 ~]# docker run --rm --cap-add=ALL --gpus device=16 -it nvcr.io/nvidia/deepstream:6.1.1-base bash -c "export | grep NV"
declare -x NVARCH="x86_64"
declare -x NVIDIA_DRIVER_CAPABILITIES="compute,utility,video,compute,graphics,utility"
declare -x NVIDIA_REQUIRE_CUDA="cuda>=11.7 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511"
declare -x NVIDIA_VISIBLE_DEVICES="16"
declare -x NV_CUDA_COMPAT_PACKAGE="cuda-compat-11-7"
declare -x NV_CUDA_CUDART_VERSION="11.7.99-1"
declare -x NV_CUDA_LIB_VERSION="11.7.1-1"
declare -x NV_LIBCUBLAS_PACKAGE="libcublas-11-7=11.10.3.66-1"
declare -x NV_LIBCUBLAS_PACKAGE_NAME="libcublas-11-7"
declare -x NV_LIBCUBLAS_VERSION="11.10.3.66-1"
declare -x NV_LIBCUSPARSE_VERSION="11.7.4.91-1"
declare -x NV_LIBNCCL_PACKAGE="libnccl2=2.13.4-1+cuda11.7"
declare -x NV_LIBNCCL_PACKAGE_NAME="libnccl2"
declare -x NV_LIBNCCL_PACKAGE_VERSION="2.13.4-1"
declare -x NV_LIBNPP_PACKAGE="libnpp-11-7=11.7.4.75-1"
declare -x NV_LIBNPP_VERSION="11.7.4.75-1"
declare -x NV_NVTX_VERSION="11.7.91-1"

Hi @genaro.costa,
I think this "16" limitation comes from the V4L2 library - see v4l-utils/libv4l2-priv.h at master · pboettch/v4l-utils · GitHub.
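For reference, the relevant limit in that header (the same line the patches below change):

#define V4L2_MAX_DEVICES 16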

The fix in [mmapi] libv4l2: error attempting to open more then 16 video devices - #5 by Morganh could be a reference. It is for Jetson/ARM devices, but you may refer to it to update the V4L2 utils on your machine for this issue.
Meanwhile, I’ll check if we could have an official fix.

Thanks!

Hi @genaro.costa
Please try the steps below to update v4l2 in the docker and check whether this solves the issue.

# git clone https://github.com/pboettch/v4l-utils.git
# cd v4l-utils/
// apply the change below (shown in the git diff at the end of this post)
# autoreconf -vfi
# ./configure --without-jpeg
# make
# make install
root@c09cdff56b13:~/v4l-utils# git diff
diff --git a/lib/libv4l2/libv4l2-priv.h b/lib/libv4l2/libv4l2-priv.h
index 343db5e1..8f03ae75 100644
--- a/lib/libv4l2/libv4l2-priv.h
+++ b/lib/libv4l2/libv4l2-priv.h
@@ -25,7 +25,7 @@

 #include "../libv4lconvert/libv4lsyscall-priv.h"

-#define V4L2_MAX_DEVICES 16
+#define V4L2_MAX_DEVICES 32
 /* Warning when making this larger the frame_queued and frame_mapped members of
    the v4l2_dev_info struct can no longer be a bitfield, so the code needs to
    be adjusted! */
diff --git a/lib/libv4l2/libv4l2.c b/lib/libv4l2/libv4l2.c
index 966a000c..921c7e32 100644
--- a/lib/libv4l2/libv4l2.c
+++ b/lib/libv4l2/libv4l2.c
@@ -90,6 +90,10 @@ static void v4l2_set_src_and_dest_format(int index,
 static pthread_mutex_t v4l2_open_mutex = PTHREAD_MUTEX_INITIALIZER;
 static struct v4l2_dev_info devices[V4L2_MAX_DEVICES] = {
        { .fd = -1 },
+       { .fd = -1 },
+       { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
+       { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
+       { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
        { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
        { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
        { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }
diff --git a/lib/libv4lconvert/control/libv4lcontrol.c b/lib/libv4lconvert/control/libv4lcontrol.c
index 33bf9ce9..03a1e56f 100644
--- a/lib/libv4lconvert/control/libv4lcontrol.c
+++ b/lib/libv4lconvert/control/libv4lcontrol.c
@@ -32,6 +32,7 @@
 #include <unistd.h>
 #include <string.h>
 #include <pwd.h>
+#include <sys/sysmacros.h>
 #include "libv4lcontrol.h"
 #include "libv4lcontrol-priv.h"
 #include "../libv4lsyscall-priv.h"
diff --git a/utils/libmedia_dev/get_media_devices.c b/utils/libmedia_dev/get_media_devices.c
index e3a22006..72a79aa8 100644
--- a/utils/libmedia_dev/get_media_devices.c
+++ b/utils/libmedia_dev/get_media_devices.c
@@ -25,6 +25,7 @@
 #include <stdlib.h>
 #include <dirent.h>
 #include <limits.h>
+#include <sys/sysmacros.h>
 #include "get_media_devices.h"

 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
diff --git a/utils/media-ctl/libmediactl.c b/utils/media-ctl/libmediactl.c
index ec360bd6..8bf7f7e8 100644
--- a/utils/media-ctl/libmediactl.c
+++ b/utils/media-ctl/libmediactl.c
@@ -33,6 +33,7 @@
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>
+#include <sys/sysmacros.h>

 #include <linux/media.h>
 #include <linux/videodev2.h>
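
After make install, a quick check like the one below (a sketch; the exact paths depend on your configure flags) confirms which libv4l2 the loader resolves:

# ldconfig -p | grep libv4l2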

Hi,

I created the setup and patch bash script below:

#!/bin/bash

set -x 
rm -rf v4l-utils/
git clone https://github.com/pboettch/v4l-utils.git
cd v4l-utils/
cat > 32_cams.patch << EOF
diff --git a/lib/libv4l2/libv4l2-priv.h b/lib/libv4l2/libv4l2-priv.h
index 343db5e1..8f03ae75 100644
--- a/lib/libv4l2/libv4l2-priv.h
+++ b/lib/libv4l2/libv4l2-priv.h
@@ -25,7 +25,7 @@

 #include "../libv4lconvert/libv4lsyscall-priv.h"

-#define V4L2_MAX_DEVICES 16
+#define V4L2_MAX_DEVICES 32
 /* Warning when making this larger the frame_queued and frame_mapped members of
    the v4l2_dev_info struct can no longer be a bitfield, so the code needs to
    be adjusted! */
diff --git a/lib/libv4l2/libv4l2.c b/lib/libv4l2/libv4l2.c
index 966a000c..921c7e32 100644
--- a/lib/libv4l2/libv4l2.c
+++ b/lib/libv4l2/libv4l2.c
@@ -90,6 +90,10 @@ static void v4l2_set_src_and_dest_format(int index,
 static pthread_mutex_t v4l2_open_mutex = PTHREAD_MUTEX_INITIALIZER;
 static struct v4l2_dev_info devices[V4L2_MAX_DEVICES] = {
 	{ .fd = -1 },
+	{ .fd = -1 },
+	{ .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
+	{ .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
+	{ .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
 	{ .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
 	{ .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 },
 	{ .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }, { .fd = -1 }
diff --git a/lib/libv4lconvert/control/libv4lcontrol.c b/lib/libv4lconvert/control/libv4lcontrol.c
index 33bf9ce9..03a1e56f 100644
--- a/lib/libv4lconvert/control/libv4lcontrol.c
+++ b/lib/libv4lconvert/control/libv4lcontrol.c
@@ -32,6 +32,7 @@
 #include <unistd.h>
 #include <string.h>
 #include <pwd.h>
+#include <sys/sysmacros.h>
 #include "libv4lcontrol.h"
 #include "libv4lcontrol-priv.h"
 #include "../libv4lsyscall-priv.h"
diff --git a/utils/libmedia_dev/get_media_devices.c b/utils/libmedia_dev/get_media_devices.c
index e3a22006..72a79aa8 100644
--- a/utils/libmedia_dev/get_media_devices.c
+++ b/utils/libmedia_dev/get_media_devices.c
@@ -25,6 +25,7 @@
 #include <stdlib.h>
 #include <dirent.h>
 #include <limits.h>
+#include <sys/sysmacros.h>
 #include "get_media_devices.h"

 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
diff --git a/utils/media-ctl/libmediactl.c b/utils/media-ctl/libmediactl.c
index ec360bd6..8bf7f7e8 100644
--- a/utils/media-ctl/libmediactl.c
+++ b/utils/media-ctl/libmediactl.c
@@ -33,6 +33,7 @@
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>
+#include <sys/sysmacros.h>

 #include <linux/media.h>
 #include <linux/videodev2.h>
EOF
git apply 32_cams.patch -vv || exit 1
autoreconf -vfi
./configure --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu/ --libexecdir=/usr/lib/x86_64-linux-gnu/ --without-jpeg
make
make install

Note that the script configures the build to install the libraries to the same locations as the existing files, and the loader resolves to those paths:

root@0a109fe3d1c6:/build# ldconfig -p | grep v4l2
	libv4l2rds.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libv4l2rds.so.0
	libv4l2rds.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libv4l2rds.so
	libv4l2.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libv4l2.so.0
	libv4l2.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libv4l2.so

After running the script and installing the new libraries, we still get the same error:

root@0a109fe3d1c6:/build# gst-launch-1.0 -m rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 ! rtph264depay ! nvv4l2decoder ! debugspy ! fakesink

(gst-plugin-scanner:16594): GStreamer-WARNING **: 13:28:15.369: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory

(gst-plugin-scanner:16594): GStreamer-WARNING **: 13:28:15.613: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:16594): GStreamer-WARNING **: 13:28:15.649: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
Got message #10 from element "fakesink0" (state-changed): GstMessageStateChanged, old-state=(GstState)GST_STATE_NULL, new-state=(GstState)GST_STATE_READY, pending-state=(GstState)GST_STATE_VOID_PENDING;
Got message #11 from element "debugspy0" (state-changed): GstMessageStateChanged, old-state=(GstState)GST_STATE_NULL, new-state=(GstState)GST_STATE_READY, pending-state=(GstState)GST_STATE_VOID_PENDING;
Got message #12 from element "nvv4l2decoder0" (error): GstMessageError, gerror=(GError)NULL, debug=(string)"v4l2_calls.c\(651\):\ gst_v4l2_open\ \(\):\ /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:\012system\ error:\ No\ such\ file\ or\ directory";
ERROR: from element /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0: Could not open device '/dev/nvidia0' for reading and writing.
Additional debug info:
v4l2_calls.c(651): gst_v4l2_open (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:
system error: No such file or directory
Setting pipeline to NULL ...
Freeing pipeline ...
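
If it helps, one way to confirm which libv4l2 copy the process actually loads is glibc's LD_DEBUG facility (a sketch, reusing the same pipeline as above):

root@0a109fe3d1c6:/build# LD_DEBUG=libs gst-launch-1.0 rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 ! rtph264depay ! nvv4l2decoder ! fakesink 2>&1 | grep -i libv4l2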

Do you think there is any other library that could have been statically linked against the previous version?

Regards,
Genaro

Could you check "/dev/nvidia0" with "ls -l /dev/nvidia0" in the repo?

I don’t think so.

Hi mchi, below you can see the NVIDIA device node created for every GPU index.

[root@gpu02 ~]# for i in {0..31} ; do echo docker run --rm --cap-add=ALL --gpus '"device='${i}'"' -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c 'ls -l /dev/nvidia[0-9]*' ; docker run --rm --cap-add=ALL --gpus '"device='${i}'"' -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c 'ls -l /dev/nvidia[0-9]*' ; done 
docker run --rm --cap-add=ALL --gpus "device=0" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 8 Dec  7 18:53 /dev/nvidia8
docker run --rm --cap-add=ALL --gpus "device=1" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 9 Dec  7 18:53 /dev/nvidia9
docker run --rm --cap-add=ALL --gpus "device=2" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 10 Dec  7 18:53 /dev/nvidia10
docker run --rm --cap-add=ALL --gpus "device=3" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 11 Dec  7 18:53 /dev/nvidia11
docker run --rm --cap-add=ALL --gpus "device=4" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 12 Dec  7 18:53 /dev/nvidia12
docker run --rm --cap-add=ALL --gpus "device=5" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 13 Dec  7 18:53 /dev/nvidia13
docker run --rm --cap-add=ALL --gpus "device=6" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 14 Dec  7 18:53 /dev/nvidia14
docker run --rm --cap-add=ALL --gpus "device=7" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 15 Dec  7 18:53 /dev/nvidia15
docker run --rm --cap-add=ALL --gpus "device=8" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 4 Dec  7 18:53 /dev/nvidia4
docker run --rm --cap-add=ALL --gpus "device=9" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 5 Dec  7 18:53 /dev/nvidia5
docker run --rm --cap-add=ALL --gpus "device=10" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 6 Dec  7 18:53 /dev/nvidia6
docker run --rm --cap-add=ALL --gpus "device=11" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 7 Dec  7 18:53 /dev/nvidia7
docker run --rm --cap-add=ALL --gpus "device=12" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 0 Dec  7 18:53 /dev/nvidia0
docker run --rm --cap-add=ALL --gpus "device=13" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 1 Dec  7 18:53 /dev/nvidia1
docker run --rm --cap-add=ALL --gpus "device=14" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 2 Dec  7 18:53 /dev/nvidia2
docker run --rm --cap-add=ALL --gpus "device=15" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 3 Dec  7 18:53 /dev/nvidia3
docker run --rm --cap-add=ALL --gpus "device=16" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 24 Dec  7 18:53 /dev/nvidia24
docker run --rm --cap-add=ALL --gpus "device=17" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 25 Dec  7 18:53 /dev/nvidia25
docker run --rm --cap-add=ALL --gpus "device=18" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 26 Dec  7 18:53 /dev/nvidia26
docker run --rm --cap-add=ALL --gpus "device=19" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 27 Dec  7 18:53 /dev/nvidia27
docker run --rm --cap-add=ALL --gpus "device=20" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 28 Dec  7 18:53 /dev/nvidia28
docker run --rm --cap-add=ALL --gpus "device=21" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 29 Dec  7 18:53 /dev/nvidia29
docker run --rm --cap-add=ALL --gpus "device=22" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 30 Dec  7 18:53 /dev/nvidia30
docker run --rm --cap-add=ALL --gpus "device=23" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 31 Dec  7 18:53 /dev/nvidia31
docker run --rm --cap-add=ALL --gpus "device=24" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 20 Dec  7 18:53 /dev/nvidia20
docker run --rm --cap-add=ALL --gpus "device=25" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 21 Dec  7 18:53 /dev/nvidia21
docker run --rm --cap-add=ALL --gpus "device=26" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 22 Dec  7 18:53 /dev/nvidia22
docker run --rm --cap-add=ALL --gpus "device=27" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 23 Dec  7 18:53 /dev/nvidia23
docker run --rm --cap-add=ALL --gpus "device=28" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 16 Dec  7 18:53 /dev/nvidia16
docker run --rm --cap-add=ALL --gpus "device=29" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 17 Dec  7 18:53 /dev/nvidia17
docker run --rm --cap-add=ALL --gpus "device=30" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 18 Dec  7 18:53 /dev/nvidia18
docker run --rm --cap-add=ALL --gpus "device=31" -v /root/build/:/build -it -w /build nvcr.io/nvidia/deepstream:6.1.1-devel bash -c ls -l /dev/nvidia[0-9]*
crw-rw-rw- 1 root root 195, 19 Dec  7 18:53 /dev/nvidia19

Just to recap: for 'device' ids 0 to 15 the container always receives a /dev/nvidiaN node with N below 16, while for ids greater than 15 it receives /dev/nvidia16 through /dev/nvidia31, and those are exactly the cases that produce the error.

The problem is caused by gst-v4l2. It is fixed by removing the device limitation in gst-v4l2.
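
As a rough illustration (a hypothetical sketch, not the actual gst-v4l2 code): a device scan capped at 16 nodes never reaches the /dev/nvidia16 to /dev/nvidia31 nodes that the container receives for GPU ids above 15, which matches the mapping listed earlier.

# hypothetical sketch of a capped scan; the real fix removes this kind of limit
for i in $(seq 0 15); do
    test -e /dev/nvidia$i && echo "would open /dev/nvidia$i"
done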
