Missing aarch64 NPP .so files on Host PC for cross-compilation

I have just started porting a program to the TX1. I would like to cross-compile it on the host.

Host: Ubuntu
Target: TX1
Jetpack: 3.3

I managed to use the correct toolchain to compile, but at link time, the linker cannot find the .so files for the NPP libraries (ex: libnppial.so). Comparing the contents of /usr/local/cuda-9.0/targets/aarch64-linux/lib and /usr/local/cuda-9.0/targets/x86_64-linux/lib, I discovered that the .so files for libnpp* are present in the x86_64 target but missing in the aarch64 target.
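For reference, the link step looks roughly like this (compiler prefix, object names, and the exact set of NPP libraries are placeholders from memory, not my exact command):

```bash
# Hypothetical cross-link step; the -L path is the directory that is missing
# the libnpp*.so files on my host but populated on the TX1.
CUDA_AARCH64=/usr/local/cuda-9.0/targets/aarch64-linux

ls "$CUDA_AARCH64/lib/"libnpp*        # empty on the host PC

aarch64-linux-gnu-g++ main.o \
    -L"$CUDA_AARCH64/lib" \
    -lnppial -lnppist -lnpps -lcudart \
    -o my_program
```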

I went over to the TX1 and discovered that its /usr/local/cuda-9.0/targets/aarch64-linux/lib directory DOES contain the libnpp*.so files.

My questions:

  1. Did I miss a step when I installed the JetPack on the host PC?
  2. Is there a place where I can just download the aarch64 .so files and install them on the host?
  3. Alternatively, is it safe for me to just copy these libs over from the TX1?

Thanks!

Hi BareMetalCoder,

Is this still blocking your development?
Was CUDA installed through JetPack, or from somewhere else?

Thanks

It was installed through JetPack 3.3.

I took a chance and copied the .so files from the TX1. My compilation completed successfully, and the few functions I tried ended up working correctly, but something came up and I have to put it on ice for a few weeks. I still need to test CUDA in more detail.

Furthermore, I quickly ran into a wall regarding other libraries I needed, so I’m looking into installing the packages I need via apt-get on the TX1, and then making a copy of its rootfs onto my development machine. From there, I could use the copied environment as a sysroot, and then install the results of the compilation to another copy of the rootfs (some sort of staging area for the final rootfs image).
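In other words, something along these lines (hostname, user, and paths are placeholders, and I have not actually tried this end to end yet):

```bash
# 1. Copy the TX1's rootfs to the host to use as a sysroot (pseudo filesystems excluded).
mkdir -p ~/tx1-sysroot
rsync -a --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
    ubuntu@tx1:/ ~/tx1-sysroot/

# 2. Cross-compile against that sysroot.
aarch64-linux-gnu-g++ --sysroot="$HOME/tx1-sysroot" main.cpp -o my_program \
    -L"$HOME/tx1-sysroot/usr/local/cuda-9.0/targets/aarch64-linux/lib" \
    -lnppial -lcudart

# 3. Install the result into a second copy of the rootfs that acts as a staging
#    area for the final image.
install -D my_program ~/tx1-staging/usr/local/bin/my_program
```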

All that being said, it feels like I am manually putting together a workflow that has already been figured out by others such as the fine people who maintain Yocto. Does Nvidia have a recommended course of action for setting up a cross-compilation environment for the TX1?

Thanks.

I tend to put what I need on the Jetson, and then use a loopback mounted clone on my host if I run into anything complicated. Symbolic links or changes to “/etc/ld.so.conf.d/” can work well. rsync can update the clone if you update the Jetson, and it is good to have a clone copy anyway.

Thanks. Would you mind elaborating on the benefits of a loopback-mounted clone over just keeping the files in a directory somewhere on my machine? I'm assuming that by loopback clone you mean mounting a system.img.raw-style file created with flash.sh?

You are correct. If loopback mounted, then the clone can be exchanged for any other clone (you could, for example, create something like "/usr/local/lib/aarch64…" or whatever and symlink it to the clone mount location…the act of mounting the clone sets up the entire environment). If you have a TK1, a TX1, a TX2, and an Xavier, then you only need to mount one clone at a time. I can see the advantage of a permanent copy if you are doing long-term development on just one platform, but consider if you have to exchange environments with someone you are working with…not that a 16 to 30 GB file is easy to pass around, but the results are easy to reproduce.
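As a rough sketch (image and path names are just examples; use whatever your clone is actually called):

```bash
# Mount the raw clone image read-only at a fixed location.
sudo mkdir -p /mnt/clone
sudo mount -o loop,ro system.img.raw /mnt/clone

# Point one stable path at the libraries inside the clone; the same symlink
# keeps working no matter which Jetson's clone happens to be mounted.
sudo ln -sfn /mnt/clone/usr/local/cuda-9.0/targets/aarch64-linux/lib \
    /usr/local/lib/aarch64-cuda-libs

# Swapping environments is just unmounting one clone and mounting another.
sudo umount /mnt/clone
sudo mount -o loop,ro tx2_system.img.raw /mnt/clone
```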

I like having a clone anyway, and the clone can be rsync updated and used in case of Jetson failure.

That really clarifies it for me. Thanks!
(And thank you for all of the other posts of yours on this forum that have helped me up to now!)