Docker image with DeepStream 6.0.1 that contains CUDA dependencies for Jetson Nano

I want to create an image with DeepStream 6.0.1 that contains all the requirements, such as CUDA and TensorRT.

I’m working on a Jetson Nano DevKit 4GB with the full JetPack installed, but the target Jetson has only part of JetPack and does not have the CUDA libraries. The devices run in headless mode; no display is used and none will ever be.

Running docker run -it --gpus=all nvcr.io/nvidia/deepstream-l4t:6.0.1-base works just fine. But when I try to run my custom image I get the nvbufsurftransform: Could not get EGL display connection error and the nvinfer plugin cannot be loaded (No such element or plugin 'nvinfer'). The DISPLAY env variable is unset.

Does anybody see what the problem is, or know how to build such an image?

Board: Jetson Nano DevKit 4GB
JetPack: 4.6.4
L4T: 32.7.4
Distribution: Ubuntu 18.04
Kernel: Linux version 4.9.337-tegra (buildbrain@mobile-u64-5434-d8000) (gcc version 7.3.1 20180425 [linaro-7.3-2018.05 revision d29120a424ecfbc167ef90065c0eeb7f91977701] (Linaro GCC 7.3-2018.05) )
CUDA: 10.2.300
cuDNN: 8.2.1.32
TensorRT: 8.2.1.9
VPI: 1.2.3

How to replicate:
Build the image and run docker run -it --gpus=all <image-name> gst-inspect-1.0 nvinfer

The Dockerfile I use:

# Base image: Ubuntu 18.04 (Bionic) for arm64, with pip installed
FROM arm64v8/ubuntu:bionic as base

WORKDIR /app

RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends \
    \
    python3-pip \
    \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

# Add the NVIDIA L4T apt repositories and install the Tegra user-space libraries
FROM base as l4t

ADD https://repo.download.nvidia.com/jetson/jetson-ota-public.asc /etc/apt/trusted.gpg.d/jetson-ota-public.asc
RUN chmod 644 /etc/apt/trusted.gpg.d/jetson-ota-public.asc \
    \
    && echo "deb https://repo.download.nvidia.com/jetson/common r32.7 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
    && echo "deb https://repo.download.nvidia.com/jetson/t210 r32.7 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
    && mkdir -p /opt/nvidia/l4t-packages/ && touch /opt/nvidia/l4t-packages/.nv-l4t-disable-boot-fw-update-in-preinstall \
    \
    && apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends -o Dpkg::Options::=--force-overwrite \
    nvidia-l4t-cuda nvidia-l4t-gstreamer \
    \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/* \
    && echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig

ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all

# CUDA and cuDNN from the L4T apt repositories
FROM l4t as cuda

RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends \
    \
    nvidia-cuda nvidia-cudnn8 \
    \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

# TensorRT runtime
FROM cuda as tensorrt

RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends \
    \
    nvidia-tensorrt \
    \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

# DeepStream runtime dependencies (GStreamer, RTSP server, etc.), then the DeepStream 6.0.1 SDK itself
FROM tensorrt as deepstream

RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends \
    \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstreamer-plugins-base1.0-dev \
    libgstrtspserver-1.0-0 \
    libjansson4=2.11-1 \
    \
    curl \
    \
    && apt-get clean -y && rm -rf /var/lib/apt/lists/*

RUN curl -sO https://developer.download.nvidia.com/assets/Deepstream/DeepStream_6.0.1/deepstream_sdk_v6.0.1_jetson.tbz2 \
    && tar -xvf deepstream_sdk_v6.0.1_jetson.tbz2 -C / \
    && cd /opt/nvidia/deepstream/deepstream-6.0 \
    && ./install.sh \
    && ldconfig
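
For reference, the image is built and tested with standard commands like these (the tag deepstream-custom:6.0.1 is just an example name):

docker build -t deepstream-custom:6.0.1 .
docker run -it --gpus=all deepstream-custom:6.0.1 gst-inspect-1.0 nvinfer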

Error log:

$ gst-inspect-1.0 nvinfer
nvbuf_utils: Could not get EGL display connection
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
nvbufsurftransform: Could not get EGL display connection
[the line above is repeated 11 more times]

(gst-plugin-scanner:791): GStreamer-WARNING **: 07:56:04.153: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
nvbufsurftransform: Could not get EGL display connection

(gst-plugin-scanner:793): GStreamer-WARNING **: 07:56:04.586: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
nvbufsurftransform: Could not get EGL display connection
[the line above is repeated 7 more times]
No such element or plugin 'nvinfer'

Is it somehow related to nvidia-l4t-oem-config setup? How can I properly set headless mode in the container?

I think this issue is not caused by headless mode. Try the CLI below to start Docker:

docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream  -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:6.0.1-triton

You don’t have to build the Docker image from scratch; please refer to the base image on NGC.

@junshengy Thank you for the reply!
Unfortunately, your command does not work since, as I said in my first post, I do not use a display at all and never will. It is just an SSH connection, with no X forwarding.
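
For reference, the headless variant I can run over SSH looks roughly like this (a sketch of your command with the -e DISPLAY=$DISPLAY and /tmp/.X11-unix parts simply dropped):

docker run -it --rm --net=host --runtime nvidia \
    -w /opt/nvidia/deepstream/deepstream \
    nvcr.io/nvidia/deepstream-l4t:6.0.1-triton \
    gst-inspect-1.0 nvinfer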

I can make your example work if I unset the DISPLAY environment variable, as in the sketch above. By the way, the deepstream-l4t:6.0.1 images mount host Jetson libraries such as CUDA inside the container, as stated in the NOTES:

These containers use the NVIDIA Container Runtime for Jetson to run DeepStream applications. The NVIDIA Container Toolkit seamlessly expose specific parts of the device (i.e. BSP) to the DeepStream container, giving the applications resources need to run the application.

That behavior is not acceptable in my case, as the target Jetson Nano machine I will use does not have CUDA installed (it uses a custom bootloader image).

I want to highlight that I am currently working on a Jetson (see the specs in the first message) configured with NVIDIA SDK Manager, and even on that device the behavior is the same.

@junshengy Any ideas?

This problem has nothing to do with the display. This is an error during video memory initialization.

Do you mean that you did not install CUDA on the Jetson?

On Jetson, CUDA is shared between the host and Docker, so CUDA must be installed on the host first; otherwise Docker cannot run normally.
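
For example, a quick way to check whether the CUDA packages are actually present on the host (a sketch using standard dpkg/ls checks; package names are for JetPack 4.6 / CUDA 10.2 and may vary):

# Driver-side CUDA libraries and the CUDA toolkit
dpkg -l | grep -E 'nvidia-l4t-cuda|cuda-toolkit-10-2'
# The CUDA runtime library should exist if the toolkit is installed on the host
ls /usr/local/cuda-10.2/lib64/libcudart.so*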

Everything from the Linux_for_Tegra/nv_tegra/l4t_deb_packages folder is installed on the host during the apply_binaries.sh run:

  • nvidia-l4t-3d-core_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-apt-source_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-camera_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-configs_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-core_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-cuda_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-firmware_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-gputools_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-graphics-demos_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-gstreamer_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-init_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-initrd_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-jetson-io_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-libvulkan_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-multimedia-utils_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-multimedia_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-oem-config_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-tools_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-wayland_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-weston_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-x11_32.7.3-20221122092935_arm64.deb
  • nvidia-l4t-xusb-firmware_32.7.3-20221122092935_arm64.deb

On the other hand, the following are not installed:

  • nvidia-cuda
  • nvidia-cudnn
  • nvidia-tensorrt
  • nvidia-jetpack-runtime

The latest version of nvidia-container-runtime is installed from https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list.
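
For completeness, that setup looks roughly like the standard libnvidia-container repository instructions (a sketch; the gpgkey URL is the one from NVIDIA's usual instructions and may differ from what was actually run here):

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
curl -fsSL https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list | \
    sudo tee /etc/apt/sources.list.d/libnvidia-container.list
sudo apt-get update && sudo apt-get install -y nvidia-container-runtime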

@junshengy So could you suggest any way to run the DeepStream image without the host dependencies?

Since many DeepStream plugins depend on the BSP, there may be no way to remove the host dependency.

As a tip, you can refer to the following file and try to copy the corresponding libraries from the host into the container. I’m not sure whether this method is definitely feasible:

/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv
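
For example, a rough way to list what the runtime would normally mount and to copy the Tegra libraries into a build context (a sketch; it assumes the CSV rows look like "lib, /path", "dir, /path" or "sym, /path", and ./tegra-libs is just an example directory):

# Print the host paths listed in the CSV (drop the type column)
awk -F', ' 'NF == 2 {print $2}' /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv

# Copy the Tegra user-space libraries into a build context so they can be COPY'd into the image
mkdir -p ./tegra-libs
cp -a /usr/lib/aarch64-linux-gnu/tegra/. ./tegra-libs/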


By the way, are you using a Jetson provided by a third party?

You can consult the equipment provider.

So, it turned out to be the same issue:

Solved by adding echo "/usr/lib/aarch64-linux-gnu/tegra-egl" > /etc/ld.so.conf.d/nvidia-tegra-egl.conf && ldconfig to the Dockerfile.
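
In the Dockerfile above, that means extending the end of the l4t stage roughly like this (a sketch):

RUN echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf \
    && echo "/usr/lib/aarch64-linux-gnu/tegra-egl" > /etc/ld.so.conf.d/nvidia-tegra-egl.conf \
    && ldconfig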
