Libv4l2: error attempting to open more than 16 video devices

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.3.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.4
• NVIDIA GPU Driver Version (valid for GPU only)
535.54.03
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

My server was able to run 100-channel monitoring in previous tests, but today the multi-channel monitoring program suddenly reported an error. I didn’t change the code, and I probably didn’t change the system settings (the machine is shared, so I can’t be sure). Why does the following appear?

2023-11-10 17:15:28 - INFO - Decodebin child added: capsfilter13
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin15
2023-11-10 17:15:29 - INFO - Decodebin child added: rtph264depay13
2023-11-10 17:15:29 - INFO - Decodebin child added: h264parse13
2023-11-10 17:15:29 - INFO - Decodebin child added: capsfilter14
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin16
2023-11-10 17:15:29 - INFO - Decodebin child added: rtph264depay14
2023-11-10 17:15:29 - INFO - Decodebin child added: h264parse14
2023-11-10 17:15:29 - INFO - Decodebin child added: capsfilter15
2023-11-10 17:15:29 - INFO - Decodebin child added: nvv4l2decoder8
2023-11-10 17:15:29 - INFO - only decode key frame
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin17
2023-11-10 17:15:29 - INFO - Decodebin child added: rtph264depay15
2023-11-10 17:15:29 - INFO - Decodebin child added: h264parse15
2023-11-10 17:15:29 - INFO - Decodebin child added: capsfilter16
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin18
2023-11-10 17:15:29 - INFO - Decodebin child added: rtppcmadepay0
2023-11-10 17:15:29 - INFO - Decodebin child added: alawdec0
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin19
2023-11-10 17:15:29 - INFO - Decodebin child added: rtph264depay16
2023-11-10 17:15:29 - INFO - Decodebin child added: h264parse16
2023-11-10 17:15:29 - INFO - Decodebin child added: capsfilter17
2023-11-10 17:15:29 - INFO - Decodebin child added: nvv4l2decoder9
2023-11-10 17:15:29 - INFO - only decode key frame
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin20
2023-11-10 17:15:29 - INFO - Decodebin child added: rtppcmadepay1
2023-11-10 17:15:29 - INFO - Decodebin child added: alawdec1
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin21
2023-11-10 17:15:29 - INFO - Decodebin child added: rtph264depay17
2023-11-10 17:15:29 - INFO - Decodebin child added: h264parse17
2023-11-10 17:15:29 - INFO - Decodebin child added: capsfilter18
2023-11-10 17:15:29 - INFO - Decodebin child added: nvv4l2decoder10
2023-11-10 17:15:29 - INFO - only decode key frame
2023-11-10 17:15:29 - INFO - Decodebin child added: decodebin22
2023-11-10 17:15:29 - INFO - Decodebin child added: rtppcmadepay2
2023-11-10 17:15:29 - INFO - Decodebin child added: alawdec2
2023-11-10 17:15:29 - INFO - image_put_read_only_path: /ds_outputs/stream_17/frame_0.jpg
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder11
2023-11-10 17:15:31 - INFO - only decode key frame
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder12
2023-11-10 17:15:31 - INFO - only decode key frame
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder13
2023-11-10 17:15:31 - INFO - only decode key frame
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder14
2023-11-10 17:15:31 - INFO - only decode key frame
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder15
2023-11-10 17:15:31 - INFO - only decode key frame
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder16
2023-11-10 17:15:31 - INFO - only decode key frame
libv4l2: error attempting to open more than 16 video devices
2023-11-10 17:15:31 - INFO - Decodebin child added: h265parse1
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder17
2023-11-10 17:15:31 - INFO - Decodebin child added: capsfilter19
2023-11-10 17:15:31 - INFO - only decode key frame
libv4l2: error attempting to open more than 16 video devices
2023-11-10 17:15:31 - INFO - Decodebin child added: nvv4l2decoder18
2023-11-10 17:15:31 - INFO - only decode key frame
2023-11-10 17:15:31 - INFO - Decodebin child added: avdec_h264-1
2023-11-10 17:15:31 - INFO - Decodebin child added: avdec_h264-0
libv4l2: error attempting to open more than 16 video devices
2023-11-10 17:15:31 - INFO - Decodebin child added: avdec_h265-0
2023-11-10 17:15:31 - INFO - Error: err:gst-stream-error-quark: NvStreamMux does not suppport raw buffers. Use nvvideoconvert before NvStreamMux to convert to NVMM buffers (5),debug:gstnvstreammux.cpp(1233): gst_nvstreammux_sink_event (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer

There is no clue in your log.

Please check your system to find more clues.

The configuration information is as follows

deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.1
TensorRT Version: 8.4
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3

Please strictly follow the compatibility requirements: Quickstart Guide — DeepStream 6.3 Release documentation

For certain reasons my driver version is slightly higher. However, the program ran as expected when I first set up the environment, and I have another machine with the same configuration that is still running the workload normally, yet this machine suddenly reported the exception for no apparent reason.

‘libv4l2: error attempting to open more than 16 video devices’
Does this look like a system-level limitation?
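For reference, libv4l2 is reported to hard-code a limit of 16 simultaneously opened video devices per process, so every decoder instance past the 16th can fail to open one. A quick sketch to check how many video-device file descriptors a process is actually holding (`PID` is a placeholder; replace it with the pipeline's process id):

```shell
# Count /dev/video* file descriptors held by a process.
# PID is a placeholder -- replace with the DeepStream pipeline's PID.
PID=$$
COUNT=$(ls -l /proc/"$PID"/fd 2>/dev/null | grep -c '/dev/video' || true)
echo "open video devices: $COUNT"
```

If the count approaches 16 as streams are added, the limit is being hit inside a single process rather than system-wide.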

This looks like something broke the process stack.

Sorry, I don’t quite understand. Can you say more?

In my experience, such errors happen with a corrupted process stack (see Elements of a Process, swarthmore.edu). We can’t determine the cause without related information or clues.

There is no clue in your description. You need to debug by yourself first.

Hello, I reinstalled the environment according to the tutorial, but why are the TensorRT and cuDNN versions still 8.4? How can I uninstall them completely?

deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.1
TensorRT Version: 8.4
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3

Installation Guide :: NVIDIA Deep Learning TensorRT Documentation

1. Introduction — Installation Guide for Linux 12.3 documentation (nvidia.com)
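In case the links alone don't resolve it, one way to see what a full removal would touch on a Debian-packaged install is to simulate a purge of the TensorRT and cuDNN packages first (a sketch; `-s` only simulates, so drop it and add `sudo` to actually remove, and the package globs may need adjusting for your setup):

```shell
# Simulate (-s) purging TensorRT and cuDNN packages; no changes are made.
# Drop -s and prefix with sudo to actually remove them.
apt-get -s purge "libnvinfer*" "libnvparsers*" "libnvonnxparsers*" "libcudnn8*" \
  2>/dev/null || echo "no matching packages (or apt-get unavailable)"
```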

Thanks a lot for the link, I’ll give it a try

I uninstalled CUDA according to the following steps from the link you gave, then reinstalled CUDA according to the official tutorial. After installation the CUDA version is 12.1, but the cuDNN version is still 8.4. Why is the cuDNN version wrong?

1. sudo apt-get --purge remove "*cuda*" "*cublas*" "*cufft*" "*cufile*" "*curand*" \
   "*cusolver*" "*cusparse*" "*gds-tools*" "*npp*" "*nvjpeg*" "nsight*" "*nvvm*"
2. sudo apt-get --purge remove "*nvidia*" "libxnvctrl*"
3. sudo apt-get autoremove
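After a purge like this, it may be worth verifying what actually remains before reinstalling, since both the package database and the dynamic-linker cache can keep stale entries (a sketch, assuming a Debian-based system):

```shell
# List any cuDNN / TensorRT packages still present after the purge.
dpkg -l 2>/dev/null | grep -Ei 'cudnn|nvinfer|tensorrt' || echo "no packages found"
# Check which cuDNN / TensorRT libraries the dynamic linker still resolves.
ldconfig -p 2>/dev/null | grep -E 'libcudnn|libnvinfer' || echo "no libraries found"
```

If an old `libcudnn8` 8.4 package survives the purge, a later DeepStream install can silently keep it.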

The ‘libv4l2: error attempting to open more than 16 video devices’ exception still appears in the program. Is it possible that the cuDNN and TensorRT versions are too low? On my other machine with the same configuration, which runs normally, the cuDNN version is 8.7 and the TensorRT version is 8.5, and the rest of the software environment is the same. Apart from this, I can’t think of any other reason for the exception.

I completely uninstalled TensorRT, and then ran the following command to install it per the DeepStream 6.3 tutorial. In theory this should give version 8.5, but after the actual installation it was still 8.4, which left me very confused:

sudo apt-get install libnvinfer8=8.5.3-1+cuda11.8 libnvinfer-plugin8=8.5.3-1+cuda11.8 libnvparsers8=8.5.3-1+cuda11.8 \
libnvonnxparsers8=8.5.3-1+cuda11.8 libnvinfer-bin=8.5.3-1+cuda11.8 libnvinfer-dev=8.5.3-1+cuda11.8 \
libnvinfer-plugin-dev=8.5.3-1+cuda11.8 libnvparsers-dev=8.5.3-1+cuda11.8 libnvonnxparsers-dev=8.5.3-1+cuda11.8 \
libnvinfer-samples=8.5.3-1+cuda11.8 libcudnn8=8.7.0.84-1+cuda11.8 libcudnn8-dev=8.7.0.84-1+cuda11.8 \
python3-libnvinfer=8.5.3-1+cuda11.8 python3-libnvinfer-dev=8.5.3-1+cuda11.8
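When apt keeps resolving to 8.4 despite the pinned 8.5 versions above, the usual suspects are an older local CUDA/TensorRT repository winning the candidate selection, or packages placed on hold. A sketch to inspect what apt would actually pick and from where (assuming a Debian-based system, with the same package names as the command above):

```shell
# Show installed and candidate versions, and the repositories they come from.
apt-cache policy libnvinfer8 libcudnn8 2>/dev/null || echo "apt-cache unavailable"
# List any packages held at a fixed (possibly old) version.
apt-mark showhold 2>/dev/null || true
```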

I tried many solutions, including installing the R525.125.06 graphics driver strictly following the official DeepStream 6.3 tutorial. However, after executing the CUDA and TensorRT instructions in the tutorial, cuDNN still reports version 8.4, and TensorRT is also 8.4. Why is that?

The problem appears to have been caused by a damaged base environment. I reinstalled Ubuntu and the other base components, and now everything runs normally.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.