libnvdla_compiler.so not found

Hi,

I’m trying to build the YOLOv11 C++ TensorRT project (GitHub: hamdiboukamcha/Yolo-V11-cpp-TensorRT) on a Jetson. After some minor code changes I get to the link stage, but it errors out because the linker can’t find libnvdla_compiler.so:

...
[100%] Linking CXX executable YOLOv11TRT
/usr/bin/ld: warning: libnvdla_compiler.so, needed by /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addSlice(nvdla::ITensor*, nvdla::Weights, nvdla::Weights, nvdla::Weights, nvdla::Weights, nvdla::SliceLayerMode)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addReshape(nvdla::ITensor*, nvdla::Dims4)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setCanCompressStructuredSparseWeights(bool)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setCanGenerateDetailedLayerwiseStats(bool)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::createNetwork()'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setCanGenerateLayerwiseStats(bool)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addResize(nvdla::ITensor*, nvdla::ResizeMode, nvdla::Weights)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::ISoftMaxLayer::setAxis(int)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IConcatenationLayer::setAxis(int)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::destroyNetwork(nvdla::INetwork*)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::createWisdom()'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addReduce(nvdla::ITensor*, nvdla::PoolingType, nvdla::Weights)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IPoolingLayer::setPoolingPaddingInclusionType(nvdla::PoolingPaddingInclusionType)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::destroyWisdom(nvdla::IWisdom*)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addTranspose(nvdla::ITensor*, nvdla::Dims4)'
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/11/../../../aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setUseSoftMaxOptz(bool)'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/YOLOv11TRT.dir/build.make:269: YOLOv11TRT] Error 1
make[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/YOLOv11TRT.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

This is confirmed by running a find search:

jetson@jetson:~/Yolo-V11-cpp-TensorRT$ find /usr -name libnvdla_compiler.so -o -name libnvdla_runtime.so -o -name  libnvrm_gpu.so -o -name libnvrm_mem.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvdla_runtime.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so

So I decided to try with Docker, since nvcr.io/nvidia/l4t-jetpack:r36.4.0 exists.
I see a similar issue during docker build, but this time more libraries are missing: libnvdla_runtime.so, libnvrm_gpu.so, and libnvrm_mem.so:

18.81 [100%] Linking CXX executable YOLOv11TRT
18.94 /usr/bin/ld: warning: libnvdla_compiler.so, needed by /usr/lib/aarch64-linux-gnu/libnvinfer.so, not found (try using -rpath or -rpath-link)
19.03 /usr/bin/ld: warning: libnvdla_runtime.so, needed by /usr/local/cuda/compat/libnvcudla.so, not found (try using -rpath or -rpath-link)
19.04 /usr/bin/ld: warning: libnvrm_gpu.so, needed by /usr/local/cuda/compat/libcuda.so, not found (try using -rpath or -rpath-link)
19.04 /usr/bin/ld: warning: libnvrm_mem.so, needed by /usr/local/cuda/compat/libcuda.so, not found (try using -rpath or -rpath-link)
19.28 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2UnbindPmResources'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::getOutputTaskStatisticsDesc(int, NvDlaRuntimeTaskStatisticsDesc*)'
19.28 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setUseSoftMaxOptz(bool)'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmMemMap'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuDeviceCacheControl'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuClockGetDomains'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmMemCacheSyncForDevice'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::createSyncStrideSemaphore(NvDlaSemaphoreRec const*, unsigned int)'
19.28 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuDeviceOpen'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::bindSubmitEvent(int, NvDlaSyncEventType, nvdla::ISync*, int*)'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionSetHwpmContextSwitchMode'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionSetPcSamplingMode'
19.29 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addReshape(nvdla::ITensor*, nvdla::Dims4)'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionGetTimeoutMode'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2ReservePmResource'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionPerfbufMap'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::translateRawStatsToCsv(char const*, float, nvdla::CsvStatisticsContainer&)'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionExec'
19.29 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addSlice(nvdla::ITensor*, nvdla::Weights, nvdla::Weights, nvdla::Weights, nvdla::Weights, nvdla::SliceLayerMode)'
19.29 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuLibGetVersionInfo'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionCreateForChannel'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2RegOpsExec'
19.30 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::destroyNetwork(nvdla::INetwork*)'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionCreateChannelless'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::submit(bool, bool, unsigned int, unsigned int, nvdla::ISync**)'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::bindOutputTaskStatistics(int, NvDlaMemDescRec)'
19.30 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::createWisdom()'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmMemUnmap'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2GetInfo'
19.30 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IConcatenationLayer::setAxis(int)'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuClockCloseAsyncReq'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionClose'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionSetTimeoutMode'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmMemHandleFree'
19.30 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmMemCacheSyncForCpu'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::createSyncSemaphore(NvDlaSemaphoreRec const*)'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuDeviceReadTimeNs'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuDeviceGetInfo'
19.31 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setCanCompressStructuredSparseWeights(bool)'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuLibListDevices'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2PmaStreamAlloc'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::registerTaskStatistics(NvDlaMemDescRec, NvDlaAccessType)'
19.31 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::destroySync(nvdla::ISync*)'
19.31 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addReduce(nvdla::ITensor*, nvdla::PoolingType, nvdla::Weights)'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuClockWaitAsyncReq'
19.32 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IPoolingLayer::setPoolingPaddingInclusionType(nvdla::PoolingPaddingInclusionType)'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuLibOpen'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::unregisterTaskStatistics(NvDlaMemDescRec)'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::destroyRuntime(nvdla::IRuntime*)'
19.32 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::destroyWisdom(nvdla::IWisdom*)'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::createRuntime()'
19.32 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setCanGenerateDetailedLayerwiseStats(bool)'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuClockGetPoints'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2BindPmResources'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionSetPowergateMode'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuDeviceClose'
19.32 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuClockSet'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2PmaStreamFree'
19.33 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addTranspose(nvdla::ITensor*, nvdla::Dims4)'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::createSyncSyncpoint(NvDlaFenceRec const*)'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuDeviceGetCpuTimeCorrelationInfo'
19.33 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::createNetwork()'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmMemHandleAllocAttr'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionSetSmpcContextSwitchMode'
19.33 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::IProfile::setCanGenerateLayerwiseStats(bool)'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2ReleasePmResource'
19.33 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2PmaStreamUpdateState'
19.34 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::getNumOutputTaskStatistics(int*)'
19.34 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2Close'
19.34 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuProfilerV2CreateForDevice'
19.34 /usr/bin/ld: /usr/local/cuda/compat/libcuda.so: undefined reference to `NvRmGpuRegOpsSessionPerfbufUnmap'
19.34 /usr/bin/ld: /usr/local/cuda/compat/libnvcudla.so: undefined reference to `nvdla::IRuntime::appendDiagnosticLoadable(int*)'
19.34 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::ISoftMaxLayer::setAxis(int)'
19.34 /usr/bin/ld: /usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::INetwork::addResize(nvdla::ITensor*, nvdla::ResizeMode, nvdla::Weights)'
19.35 collect2: error: ld returned 1 exit status
19.36 make[2]: *** [CMakeFiles/YOLOv11TRT.dir/build.make:191: YOLOv11TRT] Error 1
19.36 make[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/YOLOv11TRT.dir/all] Error 2
19.36 make: *** [Makefile:91: all] Error 2

Interestingly enough, if I remove the make command from the Dockerfile, docker build ... completes with no error. Then, after docker run ..., I can cd into the build directory, run make, and everything links properly; all the libraries are found:

jetson@jetson:~/Yolo-V11-cpp-TensorRT$ docker run -it  --runtime=nvidia  ...

root@jetson:/tmp# find /usr -name libnvdla_compiler.so -o -name libnvdla_runtime.so -o -name  libnvrm_gpu.so -o -name libnvrm_mem.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvdla_runtime.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so

root@jetson:/tmp# dpkg -S libnvdla_compiler.so
dpkg-query: no path found matching pattern *libnvdla_compiler.so*

I searched and found similar issues:

but I’m afraid they don’t help in my case, maybe because I’m using JetPack 6.1 or 6.2?
One thing I noticed is that inside Docker, TensorRT seems to be packaged for JetPack 6.1:

root@jetson:/tmp# dpkg -l | grep tensor
ii  nvidia-tensorrt                      6.1+b123                                    arm64        NVIDIA TensorRT Meta Package
ii  nvidia-tensorrt-dev                  6.1+b123                                    arm64        NVIDIA TensorRT dev Meta Package
ii  tensorrt                             10.3.0.30-1+cuda12.5                        arm64        Meta package for TensorRT
ii  tensorrt-libs                        10.3.0.30-1+cuda12.5                        arm64        Meta package for TensorRT runtime libraries

while on the host, it says JetPack 6.2:

jetson@jetson:~ $ dpkg -l | grep tensorrt
ii  nvidia-tensorrt                            6.2+b77                                     arm64        NVIDIA TensorRT Meta Package
ii  tensorrt                                   10.3.0.30-1+cuda12.5                        arm64        Meta package for TensorRT
ii  tensorrt-libs                              10.3.0.30-1+cuda12.5                        arm64        Meta package for TensorRT runtime libraries

That wouldn’t explain why libnvdla_compiler.so is missing, though.

Does anybody have an idea why this is happening and how to fix it?

Thanks,
Alain

Hi,

It looks like you originally compiled YOLOv11TRT natively (outside of Docker).
If your device doesn’t have libnvdla_compiler.so, you can get it from the link below:

https://repo.download.nvidia.com/jetson#Jetpack%206.1/6.2

nvidia-l4t-dla-compiler_36.4.3-20250107174145_arm64.deb

Based on the find log, it seems there is no libnvdla_compiler.so; only libnvdla_runtime.so is found.
Could you confirm this?

The library (libnvdla_compiler.so) belongs to the Jetson OOT (out-of-tree) driver and usually isn’t installed inside the container.
Instead, it is mounted in from the host when the container is launched with the --runtime nvidia option.
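On JetPack hosts, the NVIDIA container runtime decides which host files to mount from CSV lists; the directory path and entry format below are assumptions based on typical JetPack installs, so the sketch works on a sample file rather than the real lists:

```shell
# Illustrative sample of the CSV mount-list format used by
# nvidia-container-runtime on Jetson (these entries are hypothetical; the
# real lists live under /etc/nvidia-container-runtime/host-files-for-container.d/).
cat > /tmp/l4t_sample.csv <<'EOF'
lib, /usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so
lib, /usr/lib/aarch64-linux-gnu/nvidia/libnvdla_runtime.so
lib, /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so
lib, /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so
EOF
# On a real Jetson host you would check the installed lists instead, e.g.:
#   grep nvdla /etc/nvidia-container-runtime/host-files-for-container.d/*.csv
grep nvdla /tmp/l4t_sample.csv
```

If a library is missing on the host (or absent from those lists), the runtime has nothing to mount into the container.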

Thanks.

Thanks, after installing the DLA compiler package you pointed to, I can now successfully link the executable on the host.
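For anyone following along, a quick sanity check that the library is visible on the host after installing the package (guarded with fallbacks so the commands are harmless on machines without it):

```shell
# Look for the DLA compiler library and check the dynamic linker cache.
# The fallbacks keep the commands harmless on machines without the library.
find /usr -name 'libnvdla_compiler.so' 2>/dev/null || true
ldconfig -p | grep -i nvdla || echo "nvdla libraries not in ldconfig cache"
```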

Now the remaining issue is automating the build from within Docker. The RUN command, with added debug prints, looks like the following:

RUN mkdir -p /tmp/Yolo-V11-cpp-TensorRT/build \
    && cd /tmp/Yolo-V11-cpp-TensorRT/build \
    && cmake .. \
    && echo "!!!!! LD_LIBRARY_PATH=${LD_LIBRARY_PATH} !!!!!" \
    && env \
    && ldd /usr/local/cuda/compat/libcuda.so \
    && echo "@@@@@ " && find /usr -name libnvrm_gpu.so  -o -name  libnvrm_mem.so && echo "%%%%%"\
    && make -j$(nproc)

and the output is:

5.043 !!!!! LD_LIBRARY_PATH=/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/nvidia:/usr/local/cuda/lib64: !!!!!
5.045 NVIDIA_VISIBLE_DEVICES=all
5.045 CMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
5.045 CUDNN_LIB_INCLUDE_PATH=/usr/include
5.045 PWD=/tmp/Yolo-V11-cpp-TensorRT/build
5.045 NVIDIA_DRIVER_CAPABILITIES=all
5.045 CUDA_BIN_PATH=/usr/local/cuda/bin
5.045 HOME=/root
5.045 CUDACXX=/usr/local/cuda/bin/nvcc
5.045 CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
5.045 OPENCV_VERSION=4.11.0
5.045 NVCC_PATH=/usr/local/cuda/bin/nvcc
5.045 SHLVL=1
5.045 CUDA_NVCC_EXECUTABLE=/usr/local/cuda/bin/nvcc
5.045 CUDNN_LIB_PATH=/usr/lib/aarch64-linux-gnu
5.045 LD_LIBRARY_PATH=/usr/local/cuda/compat:/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/nvidia:/usr/local/cuda/lib64:
5.045 CUDA_HOME=/usr/local/cuda
5.045 PATH=/usr/local/cuda/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
5.045 DEBIAN_FRONTEND=noninteractive
5.045 _=/usr/bin/env
5.045 OLDPWD=/tmp
5.054   linux-vdso.so.1 (0x0000ffff97e55000)
5.054   libm.so.6 => /usr/lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff95510000)
5.054   libc.so.6 => /usr/lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff95360000)
5.054   /lib/ld-linux-aarch64.so.1 (0x0000ffff97e1c000)
5.054   libdl.so.2 => /usr/lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff95340000)
5.054   librt.so.1 => /usr/lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff95320000)
5.054   libpthread.so.0 => /usr/lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff95300000)
5.054   libnvrm_gpu.so => not found
5.054   libnvrm_mem.so => not found
5.055 @@@@@ 
5.252 %%%%%

so LD_LIBRARY_PATH is set properly, but ldd /usr/local/cuda/compat/libcuda.so reports libnvrm_gpu.so and libnvrm_mem.so as missing, and the find command indeed can’t locate them.
Yet if I remove the make step so that docker build completes, then start the container with docker run ..., all of these libraries can be found:

root@jetson:/tmp# ldd /usr/local/cuda/compat/libcuda.so 
        linux-vdso.so.1 (0x0000ffff8f9e0000)
        libm.so.6 => /usr/lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff8d0a0000)
        libc.so.6 => /usr/lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff8cef0000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffff8f9a7000)
        libdl.so.2 => /usr/lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff8ced0000)
        librt.so.1 => /usr/lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff8ceb0000)
        libpthread.so.0 => /usr/lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff8ce90000)
        libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so (0x0000ffff8ce10000)
        libnvrm_mem.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so (0x0000ffff8cdf0000)
        libnvos.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvos.so (0x0000ffff8cdc0000)
        libnvsocsys.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvsocsys.so (0x0000ffff8cda0000)
        libnvtegrahv.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvtegrahv.so (0x0000ffff8cd80000)
        libnvrm_sync.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_sync.so (0x0000ffff8cd60000)
        libnvsciipc.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvsciipc.so (0x0000ffff8cd20000)
        libnvrm_chip.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_chip.so (0x0000ffff8cd00000)
        libnvrm_host1x.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_host1x.so (0x0000ffff8ccd0000)
root@jetson:/tmp# echo "@@@@@ " && find /usr -name libnvrm_gpu.so  -o -name  libnvrm_mem.so && echo "%%%%%"
@@@@@ 
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so
%%%%%

If I do not specify --gpus all --runtime nvidia in the docker run ... command, the libraries are indeed missing.
How can I get my TensorRT application to link from a docker build command?

Oh, I think I found the fix.
I updated /etc/docker/daemon.json and restarted Docker with sudo systemctl restart docker:

jetson@jetson:~$ cat /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    },
    "default-runtime": "nvidia"
}

(The "default-runtime": "nvidia" entry is what I added. JSON doesn’t allow comments, so don’t put an annotation like that in the actual file.)
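Since daemon.json must be strictly valid JSON, it is worth linting after a hand edit. A sketch, validating a sample copy in /tmp (on the device you would point the same command at /etc/docker/daemon.json):

```shell
# daemon.json must be strictly valid JSON (no comments), so validate it
# after editing. Shown against a sample copy in /tmp.
cat > /tmp/daemon_sample.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    },
    "default-runtime": "nvidia"
}
EOF
python3 -m json.tool /tmp/daemon_sample.json >/dev/null && echo "valid JSON"
# After restarting the daemon, confirm the default runtime took effect:
#   docker info --format '{{.DefaultRuntime}}'   # expect: nvidia
```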

Looks like it’s all working fine now.

Thanks for the support!