Hello,
I’m developing a multi-arch DeepStream application based on the deepstream-app sources. On amd64 the application works fine and also builds from a Dockerfile. The next step is to port the app to the Jetson and then create a multi-arch Dockerfile.
I recently upgraded to JetPack 5.0.1 DP and am now trying to build the sample app using the container from nvcr.io/nvidia/deepstream-l4t:6.1-samples.
Running the container with a command like:
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY \
  -w /opt/nvidia/deepstream/deepstream-6.1 \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/deepstream-l4t:6.1-samples /bin/bash
and then compiling deepstream-app inside it fails due to missing dependencies:
../../apps-common/src/deepstream_source_bin.c:32:10: fatal error: cuda_runtime_api.h: No such file or directory
32 | #include <cuda_runtime_api.h>
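A quick way to confirm that the header is really absent from the image (and not just an include-path issue in the Makefile; the cuda-11.4 path below matches what the Makefile looks for with CUDA_VER=11.4):

sudo docker run --rm --runtime nvidia nvcr.io/nvidia/deepstream-l4t:6.1-samples \
  ls /usr/local/cuda-11.4/include/cuda_runtime_api.h
# ls: cannot access '/usr/local/cuda-11.4/include/cuda_runtime_api.h': No such file or directory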
After researching this problem I found out that “a few” dependencies need to be mounted into the container to fix this (unfortunately I can’t find the source anymore), so a successful command to build deepstream-app looks like this:
sudo docker run -it --rm --net=host --runtime nvidia \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix/:/tmp/.X11-unix \
-v /usr/include/aarch64-linux-gnu/NvCaffeParser.h:/usr/include/aarch64-linux-gnu/NvCaffeParser.h \
-v /usr/include/aarch64-linux-gnu/NvInferPlugin.h:/usr/include/aarch64-linux-gnu/NvInferPlugin.h \
-v /usr/include/aarch64-linux-gnu/NvOnnxConfig.h:/usr/include/aarch64-linux-gnu/NvOnnxConfig.h \
-v /usr/include/aarch64-linux-gnu/NvInferConsistency.h:/usr/include/aarch64-linux-gnu/NvInferConsistency.h \
-v /usr/include/aarch64-linux-gnu/NvInferPluginUtils.h:/usr/include/aarch64-linux-gnu/NvInferPluginUtils.h \
-v /usr/include/aarch64-linux-gnu/NvOnnxParser.h:/usr/include/aarch64-linux-gnu/NvOnnxParser.h \
-v /usr/include/aarch64-linux-gnu/NvInferConsistencyImpl.h:/usr/include/aarch64-linux-gnu/NvInferConsistencyImpl.h \
-v /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h:/usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h \
-v /usr/include/aarch64-linux-gnu/NvUffParser.h:/usr/include/aarch64-linux-gnu/NvUffParser.h \
-v /usr/include/aarch64-linux-gnu/NvInfer.h:/usr/include/aarch64-linux-gnu/NvInfer.h \
-v /usr/include/aarch64-linux-gnu/NvInferRuntime.h:/usr/include/aarch64-linux-gnu/NvInferRuntime.h \
-v /usr/include/aarch64-linux-gnu/NvUtils.h:/usr/include/aarch64-linux-gnu/NvUtils.h \
-v /usr/include/aarch64-linux-gnu/NvInferImpl.h:/usr/include/aarch64-linux-gnu/NvInferImpl.h \
-v /usr/include/aarch64-linux-gnu/NvInferSafeRuntime.h:/usr/include/aarch64-linux-gnu/NvInferSafeRuntime.h \
-v /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h:/usr/include/aarch64-linux-gnu/NvInferLegacyDims.h \
-v /usr/include/aarch64-linux-gnu/NvInferVersion.h:/usr/include/aarch64-linux-gnu/NvInferVersion.h \
-v /usr/local/cuda/bin/nvcc:/usr/local/cuda/bin/nvcc \
-v /usr/local/cuda/include:/usr/local/cuda/include \
-w /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app \
nvcr.io/nvidia/deepstream-l4t:6.1-samples \
/bin/bash -c "apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev libjson-glib-dev libyaml-cpp-dev build-essential -y && CUDA_VER=11.4 make -j6"
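As an aside: if the nvidia runtime on JetPack 5 still consumes the CSV mount lists that earlier JetPack releases used (an assumption on my part, I have not verified this on 5.0.1 DP), the 18 -v lines could presumably be replaced by a single file under /etc/nvidia-container-runtime/host-files-for-container.d/, roughly:

# deepstream-build.csv (hypothetical file name)
# "lib" entries bind-mount single files, "dir" entries whole directories
lib, /usr/include/aarch64-linux-gnu/NvInfer.h
lib, /usr/include/aarch64-linux-gnu/NvInferRuntime.h
# ... one line per TensorRT header listed above ...
dir, /usr/local/cuda-11.4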
First question here: as far as I know, since JetPack 5.0 DP all dependencies (CUDA, TensorRT, etc.) are supposed to be included in the container itself (presumably that is why deepstream-l4t:6.1-samples is 3.27 GB versus about 1 GB for deepstream-l4t:6.0.1-samples?). Why do I still need to mount so many dependencies from the local JetPack installation into the container?
Using the information from above, I am now trying to build the app from a Dockerfile, just like on amd64. So I first copy all of these dependencies from the JetPack installation on the host into the image; after that, the app builds via docker run even without the 18 -v lines. However, a build via Dockerfile still fails with the following message:
/usr/bin/ld: cannot find -lcuda
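For reference, the relevant part of that Dockerfile looks roughly like this (the jetpack/ paths in the COPY lines are placeholders for wherever the host files were staged in the build context):

# Sketch, not the verbatim Dockerfile
FROM nvcr.io/nvidia/deepstream-l4t:6.1-samples

# TensorRT headers copied from the Jetson host's /usr/include/aarch64-linux-gnu
COPY jetpack/tensorrt-headers/ /usr/include/aarch64-linux-gnu/
# CUDA toolkit (nvcc + headers) copied from the host's /usr/local/cuda-11.4
COPY jetpack/cuda-11.4/ /usr/local/cuda-11.4/

RUN apt-get update && apt-get install -y libgstreamer-plugins-base1.0-dev \
    libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev \
    libjson-glib-dev libyaml-cpp-dev build-essential

WORKDIR /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
RUN CUDA_VER=11.4 make -j6    # <- fails here with "cannot find -lcuda"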
This error might be due to the nvidia runtime not being supported in docker build. To verify this, I tried to reproduce it with docker run by removing --runtime nvidia from the command above, and that indeed leads to the same error. This means the sample code cannot be built without the nvidia runtime, which would also make multi-arch container builds on an amd64 host impossible.
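One workaround I have seen suggested (not yet verified by me for this particular build) is to make nvidia the default runtime in /etc/docker/daemon.json on the Jetson, since the classic (non-BuildKit) docker build honors the daemon's default runtime even though it ignores --runtime:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

followed by sudo systemctl restart docker. Even if that works, it only helps on the Jetson itself; it cannot fix multi-arch builds on an amd64 host, where the L4T nvidia runtime is not available.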
Is there a reason why these libraries are only available in the container while the nvidia runtime is active?
Is there a workaround for this problem? Maybe with further copying of dependencies?
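For example, I wonder whether linking against the CUDA driver stub at build time would do the trick; the toolkit ships stubs precisely so that code can be linked without the real driver present. An untested sketch, assuming the CUDA toolkit copied into the image above includes lib64/stubs (the symlink target directory is my guess):

# Idea: satisfy -lcuda with the toolkit's stub during the image build only;
# the real libcuda.so is injected by the nvidia runtime when the container starts.
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/lib/aarch64-linux-gnu/libcuda.so \
 && CUDA_VER=11.4 make -j6 \
 && rm /usr/lib/aarch64-linux-gnu/libcuda.so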
Thanks a lot!