Missing dependencies while building deepstream-app using DeepStream-l4t:6.1-samples


I’m developing a multi-arch DeepStream application based on the deepstream-app sources. On amd64 the application works fine and also builds from a Dockerfile. The next step is porting the app to run on the Jetson and then creating a multi-arch Dockerfile.

I recently upgraded to Jetpack 5.0.1 DP and now I’m trying to build the sample app using the container from nvcr.io/nvidia/deepstream-l4t:6.1-samples.

Running the container with a command like:

sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY \
-w /opt/nvidia/deepstream/deepstream-6.1 -v /tmp/.X11-unix/:/tmp/.X11-unix \
nvcr.io/nvidia/deepstream-l4t:6.1-samples /bin/bash

and then compiling deepstream-app fails with missing dependencies:

../../apps-common/src/deepstream_source_bin.c:32:10: fatal error: cuda_runtime_api.h: No such file or directory
32 | #include <cuda_runtime_api.h>

After researching this problem I found out that “a few” dependencies need to be mounted into the container to fix this (unfortunately I can’t find the source anymore). A successful command to build deepstream-app looks like:

sudo docker run -it --rm --net=host  --runtime nvidia \
-v /tmp/.X11-unix/:/tmp/.X11-unix \
-v /usr/include/aarch64-linux-gnu/NvCaffeParser.h:/usr/include/aarch64-linux-gnu/NvCaffeParser.h \
-v /usr/include/aarch64-linux-gnu/NvInferPlugin.h:/usr/include/aarch64-linux-gnu/NvInferPlugin.h \
-v /usr/include/aarch64-linux-gnu/NvOnnxConfig.h:/usr/include/aarch64-linux-gnu/NvOnnxConfig.h \
-v /usr/include/aarch64-linux-gnu/NvInferConsistency.h:/usr/include/aarch64-linux-gnu/NvInferConsistency.h \
-v /usr/include/aarch64-linux-gnu/NvInferPluginUtils.h:/usr/include/aarch64-linux-gnu/NvInferPluginUtils.h \
-v /usr/include/aarch64-linux-gnu/NvOnnxParser.h:/usr/include/aarch64-linux-gnu/NvOnnxParser.h \
-v /usr/include/aarch64-linux-gnu/NvInferConsistencyImpl.h:/usr/include/aarch64-linux-gnu/NvInferConsistencyImpl.h \
-v /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h:/usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h \
-v /usr/include/aarch64-linux-gnu/NvUffParser.h:/usr/include/aarch64-linux-gnu/NvUffParser.h \
-v /usr/include/aarch64-linux-gnu/NvInfer.h:/usr/include/aarch64-linux-gnu/NvInfer.h \
-v /usr/include/aarch64-linux-gnu/NvInferRuntime.h:/usr/include/aarch64-linux-gnu/NvInferRuntime.h \
-v /usr/include/aarch64-linux-gnu/NvUtils.h:/usr/include/aarch64-linux-gnu/NvUtils.h \
-v /usr/include/aarch64-linux-gnu/NvInferImpl.h:/usr/include/aarch64-linux-gnu/NvInferImpl.h \
-v /usr/include/aarch64-linux-gnu/NvInferSafeRuntime.h:/usr/include/aarch64-linux-gnu/NvInferSafeRuntime.h \
-v /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h:/usr/include/aarch64-linux-gnu/NvInferLegacyDims.h \
-v /usr/include/aarch64-linux-gnu/NvInferVersion.h:/usr/include/aarch64-linux-gnu/NvInferVersion.h \
-v /usr/local/cuda/bin/nvcc:/usr/local/cuda/bin/nvcc \
-v /usr/local/cuda/include:/usr/local/cuda/include \
-w /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app \
nvcr.io/nvidia/deepstream-l4t:6.1-samples  \
/bin/bash -c "apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev libjson-glib-dev libyaml-cpp-dev build-essential -y && CUDA_VER=11.4 make -j6"

First question here: as far as I know, since version 5.0 DP all dependencies such as DeepStream, CUDA and TensorRT should be included in the container (which is presumably why deepstream-l4t:6.1-samples is 3.27 GB vs. 1 GB for deepstream-l4t:6.0.1-samples). Why do I still need to mount so many dependencies from the local JetPack installation into the container?

Using the information from above, I am now trying to build the app from a Dockerfile, just like on amd64. To do that I first copy all of these dependencies from JetPack into the image; after that it is possible to build the app with docker run without the 18 include mounts. However, a build via Dockerfile still fails with the following message:

/usr/bin/ld: cannot find -lcuda
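For reference, the Dockerfile I’m using looks roughly like this (a sketch, not the exact file: deps/ is my own convention for a directory in the build context that I populate with the headers and nvcc from the Jetson host, matching the mount paths above):

```dockerfile
FROM nvcr.io/nvidia/deepstream-l4t:6.1-samples

# TensorRT headers (NvInfer.h etc.) copied from the Jetson host
# into deps/include/ in the build context beforehand
COPY deps/include/ /usr/include/aarch64-linux-gnu/

# CUDA headers and nvcc copied from the host's /usr/local/cuda
COPY deps/cuda/include/ /usr/local/cuda/include/
COPY deps/cuda/bin/nvcc /usr/local/cuda/bin/nvcc

RUN apt-get update && apt-get install -y \
        libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
        libgstrtspserver-1.0-dev libx11-dev libjson-glib-dev \
        libyaml-cpp-dev build-essential

WORKDIR /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
RUN CUDA_VER=11.4 make -j6
```

With this, the compile step finds all headers, but the final link step is where the -lcuda error appears.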

This error might be due to the NVIDIA runtime not being supported in docker build. To verify, I tried to reproduce it with docker run by removing --runtime nvidia from the command, and this finally leads to the same error. This means the sample code cannot be built without the NVIDIA runtime, and multi-arch container builds on amd64 are therefore impossible as well.

Is there a reason why libraries are not available in the container until the nvidia runtime is running?

Is there a workaround for this problem? Maybe with further copying of dependencies?
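One workaround I’m considering for the missing -lcuda is linking against the driver stub library that the CUDA toolkit ships for exactly this situation (an untested sketch; it assumes /usr/local/cuda/lib64/stubs exists inside the image and that the Makefile picks up the extra linker path):

```shell
# Link against the driver stub at build time only; the real
# libcuda.so is injected by the NVIDIA runtime when the
# container is actually run with --runtime nvidia.
export LDFLAGS="-L/usr/local/cuda/lib64/stubs $LDFLAGS"

# Alternatively, make the stub visible under the default library path:
ln -s /usr/local/cuda/lib64/stubs/libcuda.so \
      /usr/lib/aarch64-linux-gnu/libcuda.so

CUDA_VER=11.4 make -j6
```

The resulting binary must not be run against the stub, only against the real driver library at deployment time.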

Thanks a lot!

Did you install TensorRT and CUDA? If not, please install them.

No, according to the DeepStream-l4t container documentation this should not be needed, because:

The DeepStream base container contains the plugins and libraries that are part of the DeepStream SDK along with dependencies such as CUDA, TensorRT, GStreamer, etc.


Since Jetpack 5.0.1 DP, NVIDIA Container Runtime no longer mounts user level libraries like CUDA, cuDNN and TensorRT from the host. These will instead be installed inside the containers.

I also found the source for these missing header files. According to this, these files were missed while building the container, so this should be fixed in the next release?


The Jetson Docker containers are for deployment only. They do not support DeepStream software development within a container. You can build applications natively on the Jetson target and create containers for them by adding the binaries to your Docker images. Alternatively, you can generate Jetson containers from your workstation using the instructions in the “Building Jetson Containers on an x86 Workstation” section of the NVIDIA Container Runtime for Jetson documentation. Or you can refer to “Building apps inside L4T DS Triton docker” in Docker Containers — DeepStream 6.1.1 Release documentation.
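The x86-workstation route mentioned above can be sketched roughly as follows (assumptions: Docker with buildx is available on the workstation, and the image tag is a placeholder):

```shell
# Register qemu-aarch64 via binfmt so arm64 binaries can execute
# during the build on the x86 host
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Cross-build the arm64 image on the x86 workstation
docker buildx build --platform linux/arm64 \
    -t myrepo/deepstream-app:l4t --load .
```

Note that this only emulates the CPU architecture; it does not provide the NVIDIA runtime during the build, so the -lcuda issue discussed in this thread applies here as well.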

There is no update from you for a period, assuming this is not an issue anymore.
Hence we are closing this topic. If need further support, please open a new one.

I am wondering about this information, because with the previous JetPack/container release (deepstream-l4t:6.0.1-samples) DeepStream development was supported without any issues. Building Jetson containers on x86 workstations via Dockerfiles was also no problem before. However, this is no longer possible, since the dependencies are now missing from the container at build time. The 6.1-triton tag with the included header files does not solve this problem either.

This is still an issue for me, because I have to change my whole development workflow and there is no longer an easy way to do automatic builds, e.g. via GitHub workflows.


The DS 6.1.1 Triton docker will include the dependencies and allow development; the other dockers will not. Is it good enough to provide this in the Triton docker (and not in samples)?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.