Docker libv4l2: error getting capabilities

  • Hardware Platform (Jetson / GPU)

    • Ubuntu 18.04

    • Jetpack 4.6.1 R32.7.1

      cat /etc/nv_tegra_release
      R32 (release), REVISION: 7.1, GCID: 29818004, BOARD: t186ref, EABI: aarch64, DATE: Sat Feb 19 17:07:00 UTC 202
      
  • Issue Type: bug

    Running a GStreamer decoding pipeline fails in Docker but succeeds on the host.

    The following GStreamer pipeline decodes an H.264 stream and saves it to the local file system.

    On the host it seems to work flawlessly:

    $ gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test
    Setting pipeline to PAUSED ...
    Opening in BLOCKING MODE
    Opening in BLOCKING MODE
    Pipeline is PREROLLING ...
    NvMMLiteOpen : Block : BlockType = 261
    NVMEDIA: Reading vendor.tegra.display-size : status: 6
    NvMMLiteBlockCreate : Block : BlockType = 261
    Redistribute latency...
    NvMMLiteOpen : Block : BlockType = 4
    ===== NVMEDIA: NVENC =====
    NvMMLiteBlockCreate : Block : BlockType = 4
    H264: Profile = 66, Level = 0
    

    When we try to run the same pipeline in the l4t-base container, the library files aren’t loaded properly. The pipeline fails and reports that it isn’t able to query the decoding device’s capabilities.

    docker run --rm --network=host --runtime=nvidia --privileged  nvcr.io/nvidia/l4t-base:r32.7.1  gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test
    (Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
    (Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
    Setting pipeline to PAUSED ...
    Failed to query video capabilities: Inappropriate ioctl for device
    libv4l2: error getting capabilities: Inappropriate ioctl for device
    ERROR: Pipeline doesn't want to pause.
    ERROR: from element /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0: Error getting capabilities for device '/dev/nvhost-nvdec': It isn't a v4l2 driver. Check if it is a v4l1 driver.
    Additional debug info:
    /dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/v4l2_calls.c(97): gst_v4l2_get_capabilities (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:
    system error: Inappropriate ioctl for device
    Setting pipeline to NULL ...
    Freeing pipeline ...
    

    When we run the same command with a sleep 1 && added, it does seem to work properly. Why does this approach solve the previous issue? We would like to avoid running the container this way; is that possible?

    docker run --rm --network=host --runtime=nvidia --privileged  nvcr.io/nvidia/l4t-base:r32.7.1 sleep 1 &&   gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test
    Setting pipeline to PAUSED ...
    Opening in BLOCKING MODE
    Opening in BLOCKING MODE
    Pipeline is PREROLLING ...
    NvMMLiteOpen : Block : BlockType = 261
    NVMEDIA: Reading vendor.tegra.display-size : status: 6
    NvMMLiteBlockCreate : Block : BlockType = 261
    Redistribute latency...
    NvMMLiteOpen : Block : BlockType = 4
    ===== NVMEDIA: NVENC =====
    NvMMLiteBlockCreate : Block : BlockType = 4
    H264: Profile = 66, Level = 0
    ^Chandling interrupt.
    Interrupt: Stopping pipeline ...
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    
Noticed a very similar issue. Any updates?

In the second command, the gst-launch part actually runs on your host, not within the container: the host shell interprets the &&, so only sleep 1 runs inside Docker.
docker run --rm --network=host --runtime=nvidia --privileged nvcr.io/nvidia/l4t-base:r32.7.1 sleep 1 && gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test
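
If the intent is to run both the delay and the pipeline inside the container, one way (a sketch, assuming the image provides /bin/bash) is to quote the whole command so the host shell does not split it at &&:

# Quote the full command so '&&' is evaluated by the shell inside the container,
# not by the host shell that launched 'docker run'.
docker run --rm --network=host --runtime=nvidia --privileged \
  nvcr.io/nvidia/l4t-base:r32.7.1 \
  /bin/bash -c "sleep 1 && gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test"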

Please use a Docker container based on compatible JetPack (JP) and DeepStream (DS) versions.
https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_docker_containers.html#a-docker-container-for-jetson
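
For JetPack 4.6.1, the matching image is deepstream-l4t:6.0.1 (used later in this thread). A quick smoke test could look like this (a sketch; the gst-inspect check assumes the GStreamer tools in the image and is just one way to confirm the NVIDIA decoder plugin is visible):

# Pull the DeepStream 6.0.1 L4T image that matches JetPack 4.6.1 (L4T r32.7.1)
docker pull nvcr.io/nvidia/deepstream-l4t:6.0.1-base
# Check that the NVIDIA V4L2 decoder plugin is registered inside the container
docker run --rm --runtime=nvidia nvcr.io/nvidia/deepstream-l4t:6.0.1-base gst-inspect-1.0 nvv4l2decoder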

Hi @Amycao, running the pipeline with the deepstream-l4t:6.0.1-base image results in the same issue.


(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
(gst-plugin-scanner:21): GStreamer-WARNING **: 07:58:42.940: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_deepstream_bins.so': libnvparsers.so.8: cannot open shared object file: No such file or directory
(gst-plugin-scanner:21): GStreamer-WARNING **: 07:58:42.958: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferaudio.so': libcufft.so.10: cannot open shared object file: No such file or directory
(gst-plugin-scanner:21): GStreamer-WARNING **: 07:58:42.962: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so': libnvparsers.so.8: cannot open shared object file: No such file or directory
(gst-plugin-scanner:21): GStreamer-WARNING **: 07:58:42.970: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:21): GStreamer-WARNING **: 07:58:42.985: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:21): GStreamer-WARNING **: 07:58:43.037: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so': libnvinfer.so.8: cannot open shared object file: No such file or directory
Setting pipeline to PAUSED ...
Failed to query video capabilities: Inappropriate ioctl for device
libv4l2: error getting capabilities: Inappropriate ioctl for device
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0: Error getting capabilities for device '/dev/nvhost-nvdec': It isn't a v4l2 driver. Check if it is a v4l1 driver.
Additional debug info:
/dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/v4l2_calls.c(97): gst_v4l2_get_capabilities (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:
system error: Inappropriate ioctl for device
Setting pipeline to NULL ...
Freeing pipeline ...

Can you find the libraries libnvinfer.so.8, libnvparsers.so.8, and libcufft.so.10 within the container?

@Amycao
The library files are not available within the container:


nvidia-docker run --network=host -it --runtime=nvidia nvcr.io/nvidia/deepstream-l4t:6.0.1-base /bin/bash

/opt/nvidia/deepstream/deepstream-6.0# find / -name libnvinfer.so.8

/opt/nvidia/deepstream/deepstream-6.0# find / -name libnvparsers.so.8

/opt/nvidia/deepstream/deepstream-6.0# find / -name libcufft.so.10

On the host I do find the files, stored at the following paths:

/usr/local/cuda-10.2/targets/aarch64-linux/lib/libcufft.so.10

/usr/lib/aarch64-linux-gnu/libnvparsers.so.8

/usr/lib/aarch64-linux-gnu/libnvinfer.so.8
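
For reference, on JetPack these host libraries are normally injected into --runtime=nvidia containers from the CSV mount lists installed by the nvidia-container-csv-* packages. One way to check whether those lists are present and mention the missing libraries (a sketch, run on the host; paths assume a standard JetPack 4.6.x install) is:

# List the CSV files the NVIDIA container runtime uses to mount host files into containers
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# Check whether the missing libraries are listed for mounting
grep -E 'libnvinfer\.so\.8|libnvparsers\.so\.8|libcufft\.so\.10' /etc/nvidia-container-runtime/host-files-for-container.d/*.csv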

How about this?
docker run --network=host -it --runtime=nvidia nvcr.io/nvidia/deepstream-l4t:6.0.1-base /bin/bash

Cannot find those files either:

$ docker run --network=host -it --runtime=nvidia nvcr.io/nvidia/deepstream-l4t:6.0.1-base /bin/bash
/opt/nvidia/deepstream/deepstream-6.0# find / -name libnvinfer.so.8
/opt/nvidia/deepstream/deepstream-6.0# find / -name libnvparsers.so.8
/opt/nvidia/deepstream/deepstream-6.0# find / -name libcufft.so.10

Any updates?

What about your Docker version? Please run:
dpkg -l|grep docker

ii  docker                                     1.5-1build1                                arm64        System tray for KDE3/GNOME2 docklet applications
ii  docker.io                                  20.10.7-0ubuntu5~18.04.3                   arm64        Linux container runtime
ii  nvidia-docker2                             2.11.0-1                                   all          nvidia-docker CLI wrapper

These are the container packages that come with JP 4.6.1. Please run nvidia-docker run --rm -it <IMAGE ID>; if that does not work, please try reinstalling them to see if that helps.
ubuntu@ubuntu-All-Series:~/Downloads/nvidia/sdkm_downloads$ ls |grep -E 'container|docker'
libnvidia-container0_0.10.0+jetpack_arm64.deb
libnvidia-container1_1.7.0-1_arm64.deb
libnvidia-container-tools_1.7.0-1_arm64.deb
nvidia-container-csv-cuda_10.2.460-1_arm64.deb
nvidia-container-csv-cudnn_8.2.1.32-1+cuda10.2_arm64.deb
nvidia-container-csv-tensorrt_8.2.1.8-1+cuda10.2_arm64.deb
nvidia-container-csv-visionworks_1.6.0.501_arm64.deb
nvidia-container-runtime_3.7.0-1_all.deb
nvidia-container-toolkit_1.7.0-1_arm64.deb
nvidia-docker2_2.8.0-1_all.deb

Hi Amycao. Could you please provide instructions on how to reinstall nvidia-docker instead of listing your downloads?

I have downloaded

  • https://repo.download.nvidia.com/jetson/common/pool/main/n/nvidia-container-toolkit/nvidia-container-toolkit_1.7.0-1_arm64.deb

  • https://repo.download.nvidia.com/jetson/common/pool/main/n/nvidia-docker2/nvidia-docker2_2.8.0-1_all.deb

And installed both with sudo dpkg -i <.deb>. Now everything seems to work, but I don’t understand how. Could you read through the steps below?

Steps to reproduce:

$ sudo apt-get autoremove nvidia-container-toolkit
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  nvidia-container-runtime nvidia-container-toolkit nvidia-container-toolkit-base nvidia-docker2
0 upgraded, 0 newly installed, 4 to remove and 162 not upgraded.
After this operation, 9.654 kB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 178374 files and directories currently installed.)
Removing nvidia-container-runtime (3.11.0-1) ...
Removing nvidia-docker2 (2.11.0-1) ...
Removing nvidia-container-toolkit (1.11.0-1) ...
Removing nvidia-container-toolkit-base (1.11.0-1) ...
$ sudo dpkg -i nvidia-container-toolkit_1.7.0-1_arm64.deb
Selecting previously unselected package nvidia-container-toolkit.
(Reading database ... 178355 files and directories currently installed.)
Preparing to unpack nvidia-container-toolkit_1.7.0-1_arm64.deb ...
Unpacking nvidia-container-toolkit (1.7.0-1) ...
Setting up nvidia-container-toolkit (1.7.0-1) ...
Installing new version of config file /etc/nvidia-container-runtime/config.toml ...
$ sudo dpkg -i nvidia-docker2_2.8.0-1_all.deb
Selecting previously unselected package nvidia-docker2.
(Reading database ... 178361 files and directories currently installed.)
Preparing to unpack nvidia-docker2_2.8.0-1_all.deb ...
Unpacking nvidia-docker2 (2.8.0-1) ...
Setting up nvidia-docker2 (2.8.0-1) ...
  • Checking versions:
$ dpkg -l|grep -E 'container|docker'
ii  containerd                                 1.5.5-0ubuntu3~18.04.1                     arm64        daemon to control runC
ii  docker                                     1.5-1build1                                arm64        System tray for KDE3/GNOME2 docklet applications
ii  docker.io                                  20.10.7-0ubuntu5~18.04.3                   arm64        Linux container runtime
ii  libavformat57:arm64                        7:3.4.8-0ubuntu0.2                         arm64        FFmpeg library with (de)muxers for multimedia containers - runtime files
ii  libkf5wallet-bin                           5.44.0-0ubuntu1                            arm64        Secure and unified container for user passwords.
ii  libkf5wallet-data                          5.44.0-0ubuntu1                            all          Secure and unified container for user passwords.
ii  libkf5wallet5:arm64                        5.44.0-0ubuntu1                            arm64        Secure and unified container for user passwords.
ii  libkwalletbackend5-5:arm64                 5.44.0-0ubuntu1                            arm64        Secure and unified container for user passwords.
ii  libmatroska6v5:arm64                       1.4.8-1.1                                  arm64        extensible open standard audio/video container format (shared library)
ii  libnvidia-container-tools                  1.11.0-1                                   arm64        NVIDIA container runtime library (command-line tools)
ii  libnvidia-container0:arm64                 0.10.0+jetpack                             arm64        NVIDIA container runtime library
ii  libnvidia-container1:arm64                 1.11.0-1                                   arm64        NVIDIA container runtime library
ii  nvidia-container-csv-cuda                  10.2.460-1                                 arm64        Jetpack CUDA CSV file
ii  nvidia-container-csv-cudnn                 8.2.1.32-1+cuda10.2                        arm64        Jetpack CUDNN CSV file
ii  nvidia-container-csv-tensorrt              8.2.1.8-1+cuda10.2                         arm64        Jetpack TensorRT CSV file
ii  nvidia-container-csv-visionworks           1.6.0.501                                  arm64        Jetpack VisionWorks CSV file
ii  nvidia-container-toolkit                   1.7.0-1                                    arm64        NVIDIA container runtime hook
rc  nvidia-container-toolkit-base              1.11.0-1                                   arm64        NVIDIA Container Toolkit Base
ii  nvidia-docker2                             2.8.0-1                                    all          nvidia-docker CLI wrapper
  • Testing pipeline (working):
$ docker run --rm --network=host --runtime=nvidia --privileged  nvcr.io/nvidia/l4t-base:r32.7.1  gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
Redistribute latency...
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0

However, if I now upgrade the above-mentioned packages, everything still works:

  • Upgrading nvidia-container-toolkit:
$ sudo apt-get install nvidia-container-toolkit
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  nvidia-container-toolkit-base
The following NEW packages will be installed:
  nvidia-container-toolkit-base
The following packages will be upgraded:
  nvidia-container-toolkit
1 upgraded, 1 newly installed, 0 to remove and 163 not upgraded.
Need to get 2.239 kB of archives.
After this operation, 5.281 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/arm64  nvidia-container-toolkit 1.11.0-1 [635 kB]
Get:2 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/arm64  nvidia-container-toolkit-base 1.11.0-1 [1.603 kB]
Fetched 2.239 kB in 7s (326 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 178366 files and directories currently installed.)
Preparing to unpack .../nvidia-container-toolkit_1.11.0-1_arm64.deb ...
Unpacking nvidia-container-toolkit (1.11.0-1) over (1.7.0-1) ...
Selecting previously unselected package nvidia-container-toolkit-base.
Preparing to unpack .../nvidia-container-toolkit-base_1.11.0-1_arm64.deb ...
Unpacking nvidia-container-toolkit-base (1.11.0-1) ...
Setting up nvidia-container-toolkit-base (1.11.0-1) ...
Installing new version of config file /etc/nvidia-container-runtime/config.toml ...
Setting up nvidia-container-toolkit (1.11.0-1) ...
  • Upgrading nvidia-docker2:
$ sudo apt-get install nvidia-docker2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  nvidia-docker2
1 upgraded, 0 newly installed, 0 to remove and 162 not upgraded.
Need to get 0 B/5.544 B of archives.
After this operation, 0 B of additional disk space will be used.
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 178370 files and directories currently installed.)
Preparing to unpack .../nvidia-docker2_2.11.0-1_all.deb ...
Unpacking nvidia-docker2 (2.11.0-1) over (2.8.0-1) ...
Setting up nvidia-docker2 (2.11.0-1) ...
  • Check versions:
$ dpkg -l|grep -E 'container|docker'
ii  containerd                                 1.5.5-0ubuntu3~18.04.1                     arm64        daemon to control runC
ii  docker                                     1.5-1build1                                arm64        System tray for KDE3/GNOME2 docklet applications
ii  docker.io                                  20.10.7-0ubuntu5~18.04.3                   arm64        Linux container runtime
ii  libavformat57:arm64                        7:3.4.8-0ubuntu0.2                         arm64        FFmpeg library with (de)muxers for multimedia containers - runtime files
ii  libkf5wallet-bin                           5.44.0-0ubuntu1                            arm64        Secure and unified container for user passwords.
ii  libkf5wallet-data                          5.44.0-0ubuntu1                            all          Secure and unified container for user passwords.
ii  libkf5wallet5:arm64                        5.44.0-0ubuntu1                            arm64        Secure and unified container for user passwords.
ii  libkwalletbackend5-5:arm64                 5.44.0-0ubuntu1                            arm64        Secure and unified container for user passwords.
ii  libmatroska6v5:arm64                       1.4.8-1.1                                  arm64        extensible open standard audio/video container format (shared library)
ii  libnvidia-container-tools                  1.11.0-1                                   arm64        NVIDIA container runtime library (command-line tools)
ii  libnvidia-container0:arm64                 0.10.0+jetpack                             arm64        NVIDIA container runtime library
ii  libnvidia-container1:arm64                 1.11.0-1                                   arm64        NVIDIA container runtime library
ii  nvidia-container-csv-cuda                  10.2.460-1                                 arm64        Jetpack CUDA CSV file
ii  nvidia-container-csv-cudnn                 8.2.1.32-1+cuda10.2                        arm64        Jetpack CUDNN CSV file
ii  nvidia-container-csv-tensorrt              8.2.1.8-1+cuda10.2                         arm64        Jetpack TensorRT CSV file
ii  nvidia-container-csv-visionworks           1.6.0.501                                  arm64        Jetpack VisionWorks CSV file
ii  nvidia-container-toolkit                   1.11.0-1                                   arm64        NVIDIA Container toolkit
ii  nvidia-container-toolkit-base              1.11.0-1                                   arm64        NVIDIA Container Toolkit Base
ii  nvidia-docker2                             2.11.0-1                                   all          nvidia-docker CLI wrapper
  • Running pipeline (working):
$ docker run --rm --network=host --runtime=nvidia --privileged  nvcr.io/nvidia/l4t-base:r32.7.1  gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! nvv4l2decoder ! filesink location=/tmp/h264.test
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
Redistribute latency...
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0

In short: we seem to have fixed it (thanks!), but we don’t know exactly how, which makes it hard to prevent this from happening again in the future. Could you check what could have gone wrong in the original setup?
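
For future comparison, one way to record the relevant container-runtime state before and after such package changes (a sketch; paths assume a standard JetPack install) is:

# Record the installed NVIDIA container packages and the runtime configuration
dpkg -l | grep -E 'nvidia-(docker|container)' > nvidia-container-versions.txt
cp /etc/nvidia-container-runtime/config.toml nvidia-container-config.toml.bak
# Confirm which runtimes Docker knows about and which one is the default
docker info 2>/dev/null | grep -i runtime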

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

I am not sure what could have gone wrong in your original setup, but I suggest you flash the whole JetPack version next time; that way you will not run into this version mismatch issue.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.