Cannot find libcuda.so in target filesystem of NVIDIA JetPack 6.1 Docker

Hi everyone,

I’m trying to move to the new JetPack 6.1 cross-compile Docker image that NVIDIA put out (nvcr.io/nvidia/jetpack-linux-aarch64-crosscompile-x86:6.1). However, I’m having trouble compiling my application in it.

I had previously used the cross-compile Docker image for JetPack 5.1.2, and after setting up my application (which uses CUDA) in it, with CMake as the build system, everything worked perfectly.
Now I’ve pulled the latest container image for 6.1 and tried to compile. I had to update some paths and add some libraries to target_link_directories and target_link_libraries that were not needed with the previous CUDA version, but that’s expected.

My problem is that one specific library is not being found. I’m getting this error:
/l4t/toolchain/aarch64--glibc--stable-2022.08-1/bin/../lib/gcc/aarch64-buildroot-linux-gnu/11.3.0/../../../../aarch64-buildroot-linux-gnu/bin/ld: warning: libcuda.so, needed by /l4t/targetfs/lib/aarch64-linux-gnu/libnvcudla.so, not found (try using -rpath or -rpath-link)

I’ve checked, and this file exists in two places:
/l4t/targetfs/usr/lib/aarch64-linux-gnu and /l4t/targetfs/usr/lib/aarch64-linux-gnu/nvidia
In the first directory it’s just a symlink to the same-named file in the nvidia subdirectory, and there it is in turn a symlink to libcuda.so.1.1, which also lives in the nvidia directory.
Both of these directories are listed in my CMakeLists.txt in the target_link_directories call.

Here are the relevant parts of the CMakeLists.txt file:

set(SYSROOT_DIR_PATH /l4t/targetfs)

target_link_directories(${APP_TARGET_NAME} PUBLIC
                          ${SYSROOT_DIR_PATH}/usr/lib
                          ${SYSROOT_DIR_PATH}/usr/local/lib
                          ${SYSROOT_DIR_PATH}/usr/local/lib/glib-2.0/include/
                          ${SYSROOT_DIR_PATH}/usr/local/cuda/lib64
                          ${SYSROOT_DIR_PATH}/usr/local/cuda-12.6/targets/aarch64-linux/lib
                          ${SYSROOT_DIR_PATH}/opt/nvidia/cupva-2.5/lib/aarch64-linux-gnu
                          ${SYSROOT_DIR_PATH}/lib/aarch64-linux-gnu
                          ${SYSROOT_DIR_PATH}/usr/lib/aarch64-linux-gnu
                          ${SYSROOT_DIR_PATH}/usr/lib/aarch64-linux-gnu/nvidia
                          ${SYSROOT_DIR_PATH}/usr/lib/aarch64-linux-gnu/tegra-egl
)

# AND

set(NVIDIA_LIBS 
                          # very long list of libraries
                          cuda
                          cudart
                          nvcudla
                          # etc.
)

target_link_libraries(${SWIFT_TARGET_NAME} PUBLIC
                          ${BOOST_LIBS}
                          ${GENERAL_LIBS}
                          ${GSTREAMER_LIBS}
                          ${NVIDIA_LIBS}
)

Since the linker error says that libcuda.so is required by libnvcudla.so, I checked with objdump whether it really is needed, and it is:

/bin/objdump /l4t/targetfs/usr/lib/aarch64-linux-gnu/libnvcudla.so -p | grep NEEDED

  NEEDED               libc.so.6
  NEEDED               libdl.so.2
  NEEDED               libcuda.so
  NEEDED               libstdc++.so.6
  NEEDED               libnvdla_runtime.so

By the way, you can see that the last library here is libnvdla_runtime. I did not originally have this in my NVIDIA_LIBS list in my CMakeLists.txt and I did get a linker error for it; however, I checked where it lives (/l4t/targetfs/usr/lib/aarch64-linux-gnu/nvidia), added it to NVIDIA_LIBS, and the linker no longer gives an error about this library.
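(For illustration only, here is a minimal sketch of how such a library could be resolved explicitly inside the target filesystem; find_library and the NVDLA_RUNTIME_LIB variable are introduced just for this example and are not what my project actually does, which is simply adding the bare name to NVIDIA_LIBS as shown above.)

# Sketch only: locate libnvdla_runtime.so inside the target filesystem.
# SYSROOT_DIR_PATH comes from the snippet above; NVDLA_RUNTIME_LIB is a
# placeholder variable name used only for this example.
find_library(NVDLA_RUNTIME_LIB
             NAMES nvdla_runtime
             PATHS ${SYSROOT_DIR_PATH}/usr/lib/aarch64-linux-gnu/nvidia
             NO_DEFAULT_PATH)
target_link_libraries(${APP_TARGET_NAME} PUBLIC ${NVDLA_RUNTIME_LIB})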

I also checked, using ldd, where libnvcudla.so looks for libcuda.so. However, ldd could not work properly inside the Docker container; it said the file was not a dynamic executable. So instead I ran ldd on libnvcudla.so on a devkit flashed with JetPack 6.1. This is the result:

linux-vdso.so.1 (0x0000ffff80900000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff806c0000)
libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff806a0000)
libcuda.so => /lib/aarch64-linux-gnu/libcuda.so (0x0000ffff7de30000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffff7dc00000)
libnvdla_runtime.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvdla_runtime.so (0x0000ffff7d5a0000)
/lib/ld-linux-aarch64.so.1 (0x0000ffff808c7000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff7d500000)
librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff7d4e0000)
libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff7d4c0000)
libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so (0x0000ffff7d440000)
libnvrm_mem.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_mem.so (0x0000ffff7d420000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffff7d3f0000)
libnvrm_host1x.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_host1x.so (0x0000ffff7d3c0000)
libnvsocsys.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvsocsys.so (0x0000ffff7d3a0000)
libnvos.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvos.so (0x0000ffff7d370000)
libnvtegrahv.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvtegrahv.so (0x0000ffff7d350000)
libnvrm_sync.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_sync.so (0x0000ffff7d330000)
libnvsciipc.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvsciipc.so (0x0000ffff7d2f0000)
libnvrm_chip.so => /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_chip.so (0x0000ffff7d2d0000)

All of the libraries listed here have been added to NVIDIA_LIBS in my CMakeLists.txt. I did notice that, for some reason, libcuda.so resolves to /lib/aarch64-linux-gnu rather than /usr/lib/aarch64-linux-gnu like most of the others, but that shouldn’t really matter, since I checked and /lib is just a symlink to /usr/lib in the Docker target filesystem.

So, seeing as the file exists in the expected directories, and other libraries (and symlinks to libraries) in those same directories are found and linked without issue, I simply have no idea where to go from here. Any help would be greatly appreciated.

Thanks

Hi,

There are some changes in JetPack 6, as we have moved some drivers to nvidia-oot.
Could you follow the instructions below to compile the CUDA sample and see if it works?

Thanks.

I managed to find a solution.

I needed to add a linker flag in my project’s CMakeLists.txt file.
The flag:

set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS}  -Wl,-rpath-link,/l4t/targetfs/usr/lib/aarch64-linux-gnu:/l4t/targetfs/usr/lib/aarch64-linux-gnu/nvidia")
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)

I added this right after the target_link_directories call, and the application now compiles and runs on an Orin NX with JetPack 6.1. I’m not entirely sure why this was needed, as target_link_directories already covers these paths; I even checked the generated “link.txt” file, which contains the link command and flags, and both directories were already present there among the libraries being linked. My best guess is that the -L entries produced by target_link_directories only apply to libraries named directly on the link line, while the linker uses -rpath-link paths to resolve the transitive dependencies of those shared libraries (such as libcuda.so, which libnvcudla.so needs).
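(For anyone adapting this: below is a minimal sketch of an equivalent, per-target way to pass the same flags, assuming the SYSROOT_DIR_PATH and APP_TARGET_NAME variables from the snippets above; it is a sketch, not the exact change from my project.)

# Sketch only: the same -rpath-link directories attached to a single target
# via target_link_options (CMake 3.13+) instead of the global
# CMAKE_EXE_LINKER_FLAGS used above.
target_link_options(${APP_TARGET_NAME} PRIVATE
                    "-Wl,-rpath-link,${SYSROOT_DIR_PATH}/usr/lib/aarch64-linux-gnu"
                    "-Wl,-rpath-link,${SYSROOT_DIR_PATH}/usr/lib/aarch64-linux-gnu/nvidia")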

Either way, I wrote this here in case it helps someone else.

Good luck.

Hi,

Thanks for sharing this info.
