Thanks, after installing the DLA compiler package you pointed to, I can now successfully link the executable on the host.
The remaining issue is automating the build from within docker. The RUN command, with added debug prints, looks like the following:
RUN mkdir -p /tmp/Yolo-V11-cpp-TensorRT/build \
&& cd /tmp/Yolo-V11-cpp-TensorRT/build \
&& cmake .. \
&& echo "!!!!! LD_LIBRARY_PATH=${LD_LIBRARY_PATH} !!!!!" \
&& env \
&& ldd /usr/local/cuda/compat/libcuda.so \
&& echo "@@@@@ " && find /usr -name libnvrm_gpu.so -o -name libnvrm_mem.so && echo "%%%%%" \
&& make -j$(nproc)
and output is:
5.043 !!!!! LD_LIBRARY_PATH=/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/nvidia:/usr/local/cuda/lib64: !!!!!
5.045 NVIDIA_VISIBLE_DEVICES=all
5.045 CMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
5.045 CUDNN_LIB_INCLUDE_PATH=/usr/include
5.045 PWD=/tmp/Yolo-V11-cpp-TensorRT/build
5.045 NVIDIA_DRIVER_CAPABILITIES=all
5.045 CUDA_BIN_PATH=/usr/local/cuda/bin
5.045 HOME=/root
5.045 CUDACXX=/usr/local/cuda/bin/nvcc
5.045 CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
5.045 OPENCV_VERSION=4.11.0
5.045 NVCC_PATH=/usr/local/cuda/bin/nvcc
5.045 SHLVL=1
5.045 CUDA_NVCC_EXECUTABLE=/usr/local/cuda/bin/nvcc
5.045 CUDNN_LIB_PATH=/usr/lib/aarch64-linux-gnu
5.045 LD_LIBRARY_PATH=/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/nvidia:/usr/local/cuda/lib64:
5.045 CUDA_HOME=/usr/local/cuda
5.045 PATH=/usr/local/cuda/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
5.045 DEBIAN_FRONTEND=noninteractive
5.045 _=/usr/bin/env
5.045 OLDPWD=/tmp
5.054 linux-vdso.so.1 (0x0000ffff97e55000)
5.054 libm.so.6 => /usr/lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff95510000)
5.054 libc.so.6 => /usr/lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff95360000)
5.054 /lib/ld-linux-aarch64.so.1 (0x0000ffff97e1c000)
5.054 libdl.so.2 => /usr/lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff95340000)
5.054 librt.so.1 => /usr/lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff95320000)
5.054 libpthread.so.0 => /usr/lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff95300000)
5.054 libnvrm_gpu.so => not found
5.054 libnvrm_mem.so => not found
5.055 @@@@@
5.252 %%%%%
So LD_LIBRARY_PATH is set properly, but ldd /usr/local/cuda/compat/libcuda.so reports libnvrm_gpu.so and libnvrm_mem.so as missing, and the find command indeed cannot locate them anywhere under /usr.
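If I understand it correctly, these libnvrm_* libraries are not baked into the image at all; they are mounted from the host by the NVIDIA container runtime when a container is started. Assuming the CSV-based mounting that JetPack uses (the path below is an assumption from my setup and may differ between JetPack versions), this can be checked on the host with something like:

# on the Jetson host, not inside the container
ls /etc/nvidia-container-runtime/host-files-for-container.d/
grep -rE "libnvrm_gpu.so|libnvrm_mem.so" /etc/nvidia-container-runtime/host-files-for-container.d/

which would explain why the libraries only show up when the container is started with the NVIDIA runtime.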
Yet, if I remove the `make` step so that the docker build completes, I can run the resulting container with `docker run --gpus all --runtime nvidia ...`, and all these libraries can be found:
root@jetson:/tmp# ldd /usr/local/cuda/compat/libcuda.so
linux-vdso.so.1 (0x0000ffff8f9e0000)
libm.so.6 => /usr/lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff8d0a0000)
libc.so.6 => /usr/lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff8cef0000)
/lib/ld-linux-aarch64.so.1 (0x0000ffff8f9a7000)
libdl.so.2 => /usr/lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff8ced0000)
librt.so.1 => /usr/lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff8ceb0000)
libpthread.so.0 => /usr/lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff8ce90000)
libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so (0x0000ffff8ce10000)
libnvrm_mem.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so (0x0000ffff8cdf0000)
libnvos.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvos.so (0x0000ffff8cdc0000)
libnvsocsys.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvsocsys.so (0x0000ffff8cda0000)
libnvtegrahv.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvtegrahv.so (0x0000ffff8cd80000)
libnvrm_sync.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_sync.so (0x0000ffff8cd60000)
libnvsciipc.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvsciipc.so (0x0000ffff8cd20000)
libnvrm_chip.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_chip.so (0x0000ffff8cd00000)
libnvrm_host1x.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_host1x.so (0x0000ffff8ccd0000)
root@jetson:/tmp# echo "@@@@@ " && find /usr -name libnvrm_gpu.so -o -name libnvrm_mem.so && echo "%%%%%"
@@@@@
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so
%%%%%
If I do not specify `--gpus all --runtime nvidia` in the `docker run ...` command, the libraries are indeed missing.
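My guess is that `docker build` does not go through the NVIDIA runtime at all, so the host libraries are never mounted into the intermediate build containers. One workaround I am considering (untested on my side; the snippet below is only a sketch and would need to be merged with any existing settings) is to make nvidia the default runtime in /etc/docker/daemon.json on the host and restart the daemon, so that build containers also get these mounts:

# /etc/docker/daemon.json on the Jetson host
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

followed by `sudo systemctl restart docker`. I have not tried this yet, so I am not sure it is the intended solution.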
How can I get my TensorRT application to link from a `docker build` command?