I’m following the instructions for the newbie CUDA examples here.
After copying the three specified files (from Jetson to Linux host), the build process complains about a number of other X11-related libraries not being found, pulled in via libGL, libglut, and libX11. The missing libs are libnvidia-tls.so.21.3, libnvidia-glcore.so.21.3, libXext.so.6, libX11.so.6, libXxf86vm.so.1, and libxcb.so.1.
I see that these libs exist in the /opt/JetPackTK1-1.1/Linux-for_Tegra/rootfs/usr/lib/arm-linux-gnueabihf directory and so on, as well as buried in the installation folders of my host’s home directory. They also exist on the Jetson, of course, since they were copied over during flashing.
So should I just add this /opt/… directory to the link path? That doesn’t seem right. Should I copy these libs into my host’s /usr/arm-linux-gnueabihf/… directory? Or did I forget to install a package somewhere?
Also, why is it that Nsight can cross-compile the examples and produce a 32-bit ARM binary, while running make directly from the host’s NVIDIA_CUDA_6.5_Samples/ directory only produces native x86_64 binaries? How can I cross-build the examples on the host, as done during the JetPack installation, without going through Nsight?
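For what it’s worth, I expected something like the following to work from the samples directory. ARMv7=1 and GCC=… are my guesses at the makefile variables for the 6.5 samples; I haven’t confirmed they’re the right knobs:

```shell
# Guess at the cross-build invocation for the CUDA 6.5 samples makefiles.
# ARMv7=1 and GCC=... are assumptions, not confirmed against the docs;
# requires the arm-linux-gnueabihf cross toolchain on the host.
cd ~/NVIDIA_CUDA_6.5_Samples
make ARMv7=1 GCC=arm-linux-gnueabihf-g++
```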
– UPDATE –
I copied everything from the /opt/Jetpack/blablabla directories into the /lib/arm-linux-gnueabihf and /usr/lib/arm-linux-gnueabihf directories and added those to the Nsight search path (all the -L linker options). That eliminated most of the linker errors, but the linker was still looking for libnvidia-tls.so.21.3 and libnvidia-glcore.so.21.3. After LOTS of hunting around I finally added -Xlinker -rpath-link="/usr/lib/arm-linux-gnueabihf/tegra", and that worked. Funny, though: this path was already specified with a -L option. (Apparently -L only applies to libraries named directly on the link line; when the linker goes to resolve the dependencies of those shared libraries, it consults -rpath-link instead.) Anyway, now I need to find myself an HDMI monitor to see the GL examples open up.