Error building Python bindings in deepstream-l4t container on Jetson

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Xavier NX
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
32.6.1
• Issue Type( questions, new requirements, bugs)
I’m trying to deploy our DeepStream Python app in a Docker container. I’m using the “nvcr.io/nvidia/deepstream-l4t:6.0-samples” container as a base, but nvcr.io/nvidia/deepstream-l4t:6.0-base would presumably work just as well. While trying to build the Python bindings, I’m running into a problem: the DeepStream library is not found.

-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:93 (message):
  Missing libnvbufsurface.so at
  /opt/nvidia/deepstream/deepstream-6.0/lib/libnvbufsurface.so

  please make sure that deepstream is installed
Call Stack (most recent call first):
  CMakeLists.txt:103 (add_ds_lib)


-- Configuring incomplete, errors occurred!

I’m using this in my Dockerfile:

# Deepstream Python Bindings
RUN apt-get update && apt-get install -y \
    python-dev \
    python3.6-dev \
    g++ \
    libglib2.0-dev \
    libglib2.0-dev-bin \
    python-gi-dev \
    libtool \
    m4 \
    autoconf \
    automake && \
    cd /tmp && \
    git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git && \
    cd deepstream_python_apps && \
    git submodule update --init && \
    apt-get install -y --reinstall ca-certificates && \
    cd 3rdparty/gst-python/ && \
    ./autogen.sh && \
    make && \
    make install && \
    cd /tmp/deepstream_python_apps/bindings/ && \
    mkdir build && \
    cd build && \
    cmake .. && \
    make && \
    pip3 install ./pyds*.whl

I more or less distilled this from the README in the GitHub repo. I’m clearly doing something wrong. I found the library it’s asking for in

/usr/lib/aarch64-linux-gnu/tegra/

But when I try to copy or symlink them, Docker errors out during the build, saying it can’t find the files and directories. I’m pulling my hair out at this stage, because when I just run the original container the files are right there…

Does anyone have a clear-cut manual for building/installing the Python bindings in a container? I’m building an aarch64 container right now, but in the near future I need to deploy this very same app in an x86 container on dGPU as well. So any solution does need to be multi-platform (maybe using a different Dockerfile, of course).

In the container:

root@15357a483294:/opt/nvidia/deepstream/deepstream/lib# ll | grep libnvbuf
lrwxrwxrwx 1 root root      51 Oct  6 04:32 libnvbufsurface.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
lrwxrwxrwx 1 root root      57 Oct  6 04:32 libnvbufsurftransform.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so

root@15357a483294:/usr/lib/aarch64-linux-gnu/tegra# ll | grep libnvbuf
-rw-r--r-- 1 root root     9952 Sep 17 04:09 libnvbuf_fdmap.so.1.0.0
lrwxrwxrwx 1 root root       23 Dec  2 10:01 libnvbuf_utils.so -> libnvbuf_utils.so.1.0.0
-rw-r--r-- 1 root root    45136 Sep 17 04:09 libnvbuf_utils.so.1.0.0
lrwxrwxrwx 1 root root       24 Dec  2 10:01 libnvbufsurface.so -> libnvbufsurface.so.1.0.0
-rw-r--r-- 1 root root   446160 Sep 17 04:09 libnvbufsurface.so.1.0.0
lrwxrwxrwx 1 root root       30 Dec  2 10:01 libnvbufsurftransform.so -> libnvbufsurftransform.so.1.0.0
-rw-r--r-- 1 root root 32793112 Sep 17 04:09 libnvbufsurftransform.so.1.0.0

Hi,

I have exactly the same problem. When I try to build and run cmake with:

RUN cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=6 -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream-6.0/

as shown in the repo, DeepStream is not found on my system and this message is displayed:

CMake Error at CMakeLists.txt:93 (message):
Missing libnvbufsurface.so at
/opt/nvidia/deepstream/deepstream-6.0//lib/libnvbufsurface.so
please make sure that deepstream is installed

However, if I enter the nvcr.io/nvidia/deepstream-l4t:6.0-samples container in -it mode and try to build the bindings there, the build completes properly. So I guess the problem is in the docker build process: Docker is not finding DeepStream when building the container.

Maybe the problem can be related to docker environment variables.

Let’s see if somebody can help us.
Best regards

I have the same problem as well. The reason is that /opt/nvidia/deepstream/deepstream/lib/libnvbufsurface.so is a symbolic link pointing to /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so, which isn’t available in this container.

Probably it’s because the NVIDIA Docker runtime maps it from the host machine into the container when running on Jetson, which is unfortunately not the case during docker build.
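One thing that might also work (I haven’t verified this myself, so treat it as a guess based on how nvidia-container-runtime behaves) is making the NVIDIA runtime the default on the Jetson host, so the tegra libraries also get mounted during docker build and not just at run time. Something like this in /etc/docker/daemon.json, followed by a Docker restart:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

With "default-runtime" set to "nvidia", RUN steps should see the host libraries under /usr/lib/aarch64-linux-gnu/tegra/ too. The obvious downside is that the resulting image then depends on those libs being mounted wherever it runs.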

My workaround is pre-building this Python package and throwing it into my Docker container as an artifact.

@otank, by “My workaround is pre-building this Python package and throwing it into my Docker container as an artifact”, do you mean that you copy the wheel and do a pip install?

I feel stupid for not thinking of this myself… haha. Thanks! That’s a great idea. Shaves off some build time in the container as well.

Yes, this is exactly how I build the container with the DeepStream pipeline. I’m not completely happy with this workaround because of library updates and so on, but at least it works.
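In case it helps anyone, the Dockerfile side of that workaround is just a COPY plus a pip install. The wheel filename below is only a placeholder example; yours will depend on the Python version and platform you built the bindings for:

```dockerfile
# Copy a pre-built pyds wheel from the build context into the image.
# The filename is an example only; use whatever your bindings build produced.
COPY pyds-1.1.0-py3-none-linux_aarch64.whl /tmp/
RUN pip3 install /tmp/pyds-1.1.0-py3-none-linux_aarch64.whl
```

This also keeps the compile toolchain (cmake, g++, the -dev packages) out of the final image, which shaves off some build time and image size.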

A neater approach could be installing the corresponding apt package in the container, the same way a real Jetson device does. It’s called nvidia-l4t-multimedia, and it’s available in NVIDIA’s repositories, which you may need to add first (apt-add-repository) along with their GPG keys (apt-key add). But I haven’t tried this yet.
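Sketching that out (untested, and the repo URLs and L4T release tag are my assumptions for JetPack 4.6 / L4T r32.6 — double-check them against /etc/apt/sources.list.d/nvidia-l4t-apt-source.list on your device):

```dockerfile
# Untested sketch: add NVIDIA's Jetson apt repositories inside the container
# and install the package that provides the tegra libraries.
# The URLs and "r32.6" release tag are assumptions for L4T r32.6 on Xavier NX.
RUN apt-get update && apt-get install -y ca-certificates gnupg && \
    apt-key adv --fetch-keys https://repo.download.nvidia.com/jetson/jetson-ota-public.asc && \
    echo "deb https://repo.download.nvidia.com/jetson/common r32.6 main" > /etc/apt/sources.list.d/nvidia-l4t.list && \
    echo "deb https://repo.download.nvidia.com/jetson/t194 r32.6 main" >> /etc/apt/sources.list.d/nvidia-l4t.list && \
    apt-get update && apt-get install -y nvidia-l4t-multimedia
```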

And please note: to compile the bindings you may need two more apt packages: libgstreamer1.0-dev and libgstreamer-plugins-base1.0-dev.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.