Provide tooling for compiling apps in Docker

Hi,

We are facing a lot of issues compiling our DeepStream apps in our CI/CD pipeline, which runs in Docker. Right now, the NVIDIA-provided DeepStream images on NGC only support runtime use, and the same goes for l4t-base.

When compiling apps, we need the CUDA tooling as well as other files.

These dependencies are really difficult to find and install.

It is not a viable solution to build the binaries on the Jetson and copy them into a Docker image. We need to build the final images in a multi-stage Docker build.

Has anyone gotten closer to compiling DeepStream pipelines in Docker?

Hi,
I didn’t get it, could you elaborate or give an example?

Since you mentioned you can’t compile DS apps, I tried it on my side with the steps below:

  1. Pull down DS docker/x86 from DeepStream | NVIDIA NGC
    $ docker pull nvcr.io/nvidia/deepstream:4.0.2-19.12-devel
  2. Start the docker
    $ nvidia-docker run -it --net=host --ipc=host --publish 0.0.0.0:6006:6006 -v /home/$user/:/home/$user/ --rm nvcr.io/nvidia/deepstream:4.0.2-19.12-devel
  3. Build the DS sample in docker

cd /root/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-app

make

The sample is built successfully.

And in this docker, CUDA is installed under /usr/local/cuda-10.1/.
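
You can double-check it inside the container with a quick sanity check, e.g.:

$ /usr/local/cuda-10.1/bin/nvcc --version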

Hi mchi,

Sorry - my initial write-up was a bit sparse.

This is not x86, but the Jetson image. We need to build our Docker images on Jetson / ARM.

I know it works on x86. It should be the same on Jetson!

Hi,
Please elaborate on what you did and what failure you ran into, so we can address your question faster.

I believe that if you use the docker we provide in DeepStream-l4t | NVIDIA NGC and follow the guidance on the page, the CUDA toolkit is in the docker.

Hi mchi,

Again, it is not possible to compile applications when building images off your deepstream-l4t image.

It is further stated as a disclaimer on NGC: “Supports deployment only: The deepstream container for Jetson is intended to be a deployment container and is not set up for building sources.”

Example in the container:

root@nvidia-desktop:~/deepstream_sdk_v4.0.2_jetson/sources/apps/sample_apps/deepstream-app# make
/bin/sh: 1: gcc: not found
cc -c -o deepstream_app.o -I../../apps-common/includes -I../../../includes -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=4 `pkg-config --cflags gstreamer-1.0 gstreamer-video-1.0 x11` deepstream_app.c
Package gstreamer-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gstreamer-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-1.0' found
Package gstreamer-video-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gstreamer-video-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-video-1.0' found
/bin/sh: 1: cc: not found
Makefile:56: recipe for target 'deepstream_app.o' failed
make: *** [deepstream_app.o] Error 127

Thanks, got your point!
Yes, as you pointed out, the docker is for deployment.
If you want to compile applications in the docker, for example compiling “~/deepstream_sdk_v4.0.2_jetson/sources/apps/sample_apps/deepstream-app”,
and you hit “gcc: not found”, you can just install it with “apt-get install gcc”.
For other lib requirements, you can refer to the README in this sample (pasted below):


Follow these procedures to use the deepstream-app application for native
compilation.

You must have the following development packages installed:

GStreamer-1.0
GStreamer-1.0 Base Plugins
GStreamer-1.0 gstrtspserver
X11 client-side library
  1. To install these packages, execute the following command:
    sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev
    libgstrtspserver-1.0-dev libx11-dev

  2. Build the sources by executing the command:
    make

Hi mchi,

As I previously stated, this works fine when running the commands manually with docker run, but not when building images.

The problem is that with docker run, the nvidia runtime mounts several things into the container that are needed for compilation. These are not mounted with docker build.

Try this Dockerfile:

FROM nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples

RUN apt-get update && apt-get install -y gcc libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev

WORKDIR /root/deepstream_sdk_v4.0.2_jetson/sources/apps/sample_apps/deepstream-app

RUN make

Then build it:

$ docker build .

When you run the build, you’ll see the following error:

cc -o deepstream-app deepstream_app.o deepstream_app_main.o deepstream_app_config_parser.o ../../apps-common/src/deepstream_tracker_bin.o ../../apps-common/src/deepstream_primary_gie_bin.o ../../apps-common/src/deepstream_source_bin.o ../../apps-common/src/deepstream_config_file_parser.o ../../apps-common/src/deepstream_common.o ../../apps-common/src/deepstream_sink_bin.o ../../apps-common/src/deepstream_perf.o ../../apps-common/src/deepstream_dewarper_bin.o ../../apps-common/src/deepstream_dsexample.o ../../apps-common/src/deepstream_secondary_gie_bin.o ../../apps-common/src/deepstream_tiled_display_bin.o ../../apps-common/src/deepstream_osd_bin.o ../../apps-common/src/deepstream_streammux.o -L/opt/nvidia/deepstream/deepstream-4.0/lib/ -lnvdsgst_meta -lnvds_meta -lnvdsgst_helper -lnvds_utils -lm -lgstrtspserver-1.0 -lgstrtp-1.0 -Wl,-rpath,/opt/nvidia/deepstream/deepstream-4.0/lib/ `pkg-config --libs gstreamer-1.0 gstreamer-video-1.0 x11`
/usr/bin/ld: warning: libnvinfer.so.6, needed by /opt/nvidia/deepstream/deepstream-4.0/lib//libnvds_utils.so, not found (try using -rpath or -rpath-link)
/opt/nvidia/deepstream/deepstream-4.0/lib//libnvds_utils.so: undefined reference to `getInferLibVersion'
collect2: error: ld returned 1 exit status
Makefile:59: recipe for target 'deepstream-app' failed
make: *** [deepstream-app] Error 1
The command '/bin/sh -c make' returned a non-zero code: 2

However, when I run the same commands in a container started with docker run, everything works fine.
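
For reference, this is roughly the interactive workflow that does work (same image, packages and sample path as above; exact run flags may vary):

$ docker run -it --runtime nvidia nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples
# then, inside the container:
apt-get update && apt-get install -y gcc libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev
cd /root/deepstream_sdk_v4.0.2_jetson/sources/apps/sample_apps/deepstream-app
make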

It is a critical problem for us that we cannot create new images with our application without first building it on the Jetson itself and then copying the binaries into a new image.

Also, critical header files like NvCaffeParser.h are not in the l4t-base or deepstream-l4t images.

As I previously stated, this works fine when running the commands manually with docker run, but not when building images.
At first, you said “Has anyone gotten closer to compiling DeepStream pipelines in Docker?”, so I did not think you meant building a docker image. And I asked you to elaborate on what you did and what failure you ran into. It did waste time!

Anyway, here is the solution:

modify /etc/docker/daemon.json as below

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
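
Then restart the docker daemon so the default runtime takes effect, and run the build again:

$ sudo service docker restart
$ docker build .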

Hi mchi,

Sorry, I thought it was clear from my first post (“It is not a viable solution to build the binaries on the Jetson and copy them into a Docker image. We need to build the final images in a multi-stage Docker build.”).

Seems like your default-runtime change solves some of it (mainly the missing CUDA stuff), but there are still libraries and header files missing from the l4t-base/deepstream-l4t images.

For example NvInfer.h / NvCaffeParser.h / NvOnnxParser.h / NvUffParser.h. They are located on the Jetson in /usr/include/aarch64-linux-gnu/, but are not available in the Docker images you provide.

Those are needed for compiling applications as well.

The problem with the libraries is that in the l4t-base image they are not symlinked; only the versioned libraries are there, e.g. libnvparsers.so.6. On the Jetson, these are symlinked to libnvparsers.so.

See here:

l4t-base

ldconfig -p | grep nvparsers
	libnvparsers.so.6 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libnvparsers.so.6

Jetson

ldconfig -p | grep nvparsers
	libnvparsers.so.6 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libnvparsers.so.6
	libnvparsers.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libnvparsers.so

What do you suggest? The only way I can think of is to manually copy the header files into the build container and create the symlinks in the image myself (roughly like the sketch below), since you don’t provide packages for this on Jetson - but that is not really viable.
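
Just to illustrate the manual workaround I mean; the local tensorrt-headers/ directory is a placeholder for headers copied off the Jetson’s /usr/include/aarch64-linux-gnu/ by hand:

FROM nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples

# headers copied manually from the Jetson, since no package provides them in the image
COPY tensorrt-headers/ /usr/include/aarch64-linux-gnu/

# recreate the symlink that exists on the Jetson but not in the image
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.6 /usr/lib/aarch64-linux-gnu/libnvparsers.so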

Okay, I agree with you! This docker is mainly for deployment, so there are lots of files missing for compiling.

But may I know why you want to compile applications in docker, since the system on the Jetson already provides the whole compile environment?

Thanks!

Hi mchi,

I got it working with a few hacks and it’s working fine.

The workflow you describe might work for a single developer or hobbyist, but for a large team developing solutions and products for the Jetson ecosystem, it is not enough. And it is not ready for production.

“Production ready” in my opinion means having a setup ready for full continuous integration. And most CI pipelines run in Docker to keep builds and dependencies isolated from each other. We are building and testing every commit and pull request coming in (we have set up Jetson Xaviers in our CI pipeline to run this).

So, to automate things in a good way, this is the way to go.


Elias,

Could you be a bit more specific about the hacks you used to get the build working? My team is running into some of the same problems and we’d really appreciate some assistance.

– Harris

Hi Harris,

Right now, we’re doing the following:

Use the nvidia runtime in Docker builds. With “docker run …” you can specify the runtime explicitly, but “docker build” has no such option, so set it as the default:

$ sudo vi /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

$ sudo service docker restart
$ docker build ..........

Then we base our Dockerfiles off l4t-base (from NGC). However, not all libraries have the correct symlinks, as they do on the normal Jetson file system.

For example, on your Jetson there is a symlink /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so pointing to /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.6, but that symlink is missing in the l4t-base image.

Right now, we have success by only symlinking these three files:

ENV NVLIB_VER=6

RUN ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.${NVLIB_VER} /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so

RUN ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.${NVLIB_VER} /usr/lib/aarch64-linux-gnu/libnvinfer.so

RUN ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.${NVLIB_VER} /usr/lib/aarch64-linux-gnu/libnvparsers.so

And then of course you need the dependencies for your application. We do it in a multi-stage build, and then copy all the binaries into a final image without the dev packages in the end (see the sketch after this paragraph).
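
A stripped-down sketch of that layout; the base image tag, paths and the app name are placeholders, not our actual build:

FROM nvcr.io/nvidia/l4t-base:r32.3.1 AS builder
RUN apt-get update && apt-get install -y gcc make pkg-config \
    libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
    libgstrtspserver-1.0-dev libx11-dev
ENV NVLIB_VER=6
# same symlinks as above, so the TensorRT libs can be found at link time
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.${NVLIB_VER} /usr/lib/aarch64-linux-gnu/libnvinfer.so && \
    ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.${NVLIB_VER} /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so && \
    ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.${NVLIB_VER} /usr/lib/aarch64-linux-gnu/libnvparsers.so
COPY . /src
WORKDIR /src
RUN make

# final image: runtime only, no dev packages
FROM nvcr.io/nvidia/l4t-base:r32.3.1
COPY --from=builder /src/my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]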

When building base images with the NVIDIA SDKs for a CI/CD pipeline, I found this github repo really useful:

idavis/jetson-containers

Just build the base images on your jetson device, push them to your own container repo, and use them as the base for your own builds.
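
For example (the registry host and tags here are just placeholders):

$ docker build -t registry.example.com/jetson/deepstream-build:4.0.2 .
$ docker push registry.example.com/jetson/deepstream-build:4.0.2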

Elias, eskild,

Thank you both for the guidance. These are very helpful resources.

– Harris