Failure In Building Custom DeepStream Application Via Docker Build

Please provide complete information as applicable to your setup.

• Hardware Platform : Jetson
• DeepStream Version : 6.1
• JetPack Version (valid for Jetson only) : 5.1
• TensorRT Version : 8.4.1
• Issue Type( questions, new requirements, bugs) : Bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Using nvcr.io/nvidia/l4t-tensorrt:r8.4.1.5-devel as the base docker, I installed all the dependencies and DeepStream 6.1.0 required for my custom DS application and created a custom docker (for simplicity, call it example_docker). I was able to successfully build my custom DS application when running example_docker in interactive mode.

However, when I added the steps to build the custom DS application to the Dockerfile that I used to create example_docker, so that the application is built during docker build, I faced the following errors during the build process.

/usr/bin/ld: cannot find -lnvbufsurface
/usr/bin/ld: cannot find -lnvbufsurftransform
collect2: error: ld returned 1 exit status

I tried the following to set the environment variable before building the custom DS application within the Dockerfile, but nothing seems to work.

ENV LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra:$LD_LIBRARY_PATH

Within example_docker, the libnvbufsurface and libnvbufsurftransform libraries are located at /usr/lib/aarch64-linux-gnu/tegra:

root@user:/home# ls /usr/lib/aarch64-linux-gnu/tegra/libnvbuf*
libnvbuf_fdmap.so.1.0.0         libnvbuf_utils.so.1.0.0         libnvbufsurface.so.1.0.0        libnvbufsurftransform.so.1.0.0  
libnvbuf_utils.so               libnvbufsurface.so              libnvbufsurftransform.so

Any thoughts on how to build the custom DS application during the example_docker build itself, instead of running example_docker in interactive mode and building it there?

Thanks,
Ganesh

There is already a DeepStream docker for Jetson: DeepStream-l4t | NVIDIA NGC. Can it be used as the base docker?

I started by experimenting with that docker. The problem with it is that it doesn’t have some CUDA headers which are required to build my custom DS application. So I used nvcr.io/nvidia/l4t-tensorrt:r8.4.1.5-devel as the base docker, as it has the CUDA headers my application requires.

On Jetson, some libraries, including libnvbufsurface and libnvbufsurftransform, are shared between the host and the docker, and they are mapped into the docker at runtime.

This is why the build succeeds in interactive mode.

You can try adding

RUN ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so

RUN ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so

in your Dockerfile.

You can also refer to this project

When I added those symlink statements to the Dockerfile, I got the following error.

Step 5/7 : RUN ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
 ---> Running in e59856dfa8af
ln: failed to create symbolic link '/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so': No such file or directory

When I run the docker in interactive mode, I can find it:

root@user:/home# ls /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so

There is a workaround.

In your Dockerfile, add a COPY command to copy this .so from the host first, then build your app.
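A sketch of that workaround, assuming the library files have been staged into the docker build context first (the staged filenames mirror the host layout shown earlier; adjust paths to your setup):

```dockerfile
# Stage the host's Tegra libraries into the build context beforehand, e.g.:
#   cp /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so.1.0.0 .
#   cp /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0 .
COPY libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/
COPY libnvbufsurftransform.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/

# Create the unversioned names the linker looks for with -lnvbufsurface etc.
RUN ln -sf /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so.1.0.0 \
           /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so && \
    ln -sf /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0 \
           /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so
```

At runtime the NVIDIA container runtime maps the host's versions over these, so the copies only serve the link step during docker build.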

The COPY command worked. Thanks.
I also faced a similar issue when creating a custom DS docker using this approach. Should I use COPY instead of linking in that approach as well?

It will work, but it will increase the size of the docker image.

At the same time, if you upgrade the driver on the host, the container may be affected.

I suggest building the application on the host and then copying the binary into the container, instead of compiling during the docker build.
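One possible shape for that approach (the binary name and install path are illustrative):

```dockerfile
# Build on the Jetson host first, against the real Tegra libraries, e.g.:
#   make -C /path/to/custom_ds_app
# Then the Dockerfile only copies the prebuilt binary; no link step,
# so no missing libnvbufsurface/libnvbufsurftransform at build time.
COPY custom_ds_app /opt/custom_ds_app/custom_ds_app
```

The shared Tegra libraries are mapped in by the NVIDIA container runtime when the container starts, so the binary resolves them at run time.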

Thanks

I was able to successfully build my custom DS application in example_docker and run it successfully on my device (with DS-6.1.0). The docker was built on my device. When I pulled this docker onto another device (with DS-6.0.1), I see the following error.

Can we run a docker built on one device on another device with a different DS version? Or
is it due to the copy operation we did during the docker build process?

It looks like there isn’t a display on your device or docker doesn’t have a DISPLAY environment variable.

Please start a new topic instead.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.