If I want to cross compile an application for the nano then is there a guide to how to tell something like a configure script where libraries are? Like if I’m doing ./configure --build=aarch64-linux-gnu or similar I also need a way to point to the libraries. Since I also have the rootfs in my L4T package I know they’re all there, so I should be able to just link against those if the system can find them, right? Or am I missing something entirely?
An explanation of linking without a cross compiler is probably a good starting point.
Cross compiling a kernel is easy, and the default docs explain it. What follows concerns user-space development, where linking is involved outside of the kernel and the target isn’t bare metal.
Normally (on a given running system, not cross architecture), the linker has a default search path, and linking goes through that path unless you do something custom. The command “ldconfig -p” will print what the linker sees.
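As a quick illustration, you can ask the linker cache where a given library lives (the paths shown in the comment are typical of an Ubuntu aarch64 system and will vary):

```shell
# Print everything the runtime linker knows about, then narrow
# the list to see where one library actually lives.
ldconfig -p | head -n 5
ldconfig -p | grep 'libc.so'
# A typical line of output looks something like:
#   libc.so.6 (libc6,AArch64) => /lib/aarch64-linux-gnu/libc.so.6
```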
For cross compiling you must either set up a cross linker with a foreign architecture installed on your host PC, or else custom point at the location of the files you want to link against. I’ve always used the latter approach, but the former may be easier for Ubuntu users since adding a foreign architecture isn’t as difficult there. I develop on Fedora, though, so I can’t give you Ubuntu-specific steps.
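On Ubuntu, the foreign-architecture route looks roughly like this. This is only a sketch, not something tested here; the package names are the standard Ubuntu cross-toolchain ones, and you may also need ports.ubuntu.com entries in your apt sources before arm64 packages will resolve:

```shell
# Tell dpkg/apt that the host should also track arm64 packages.
sudo dpkg --add-architecture arm64
sudo apt-get update

# Cross compiler and binutils (this includes the cross linker).
sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu

# Development libraries can then be pulled in as :arm64, e.g.:
sudo apt-get install libssl-dev:arm64
```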
If you have the SD card version of the Nano, you can clone the SD card, loopback mount the clone, and point at that for your libraries. Other than the mount point, the location is the same path that “ldconfig -p” shows when run on the Nano. You can pass linker flags to the cross linker just as you would to a native linker, and simply root those paths at the loopback mount.
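A sketch of that, with hypothetical names: nano.img stands for the cloned SD card image (a multi-partition image would additionally need an offset or kpartx), /mnt/nano is an arbitrary mount point, and the flag set is the usual GCC sysroot mechanism rather than anything Nano-specific:

```shell
sudo mkdir -p /mnt/nano
sudo mount -o loop,ro nano.img /mnt/nano

# The library paths are the ones "ldconfig -p" shows on the Nano,
# just re-rooted at the mount point.
SYSROOT=/mnt/nano
aarch64-linux-gnu-gcc main.c -o main \
    --sysroot="$SYSROOT" \
    -L"$SYSROOT/usr/lib/aarch64-linux-gnu" \
    -Wl,-rpath-link,"$SYSROOT/lib/aarch64-linux-gnu"
```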
Are you aware of any tutorials that may be relevant? Once I see a few examples I can probably sort the rest out.
I assume that I can use the disk image I’ve generated from the sample root filesystem? Again, pointers to any examples would be super helpful.
We provide some steps in the L4T documentation for building the L4T kernel and bootloader. However, I am not sure whether this matches your expectation.
Please be aware that there is no standard way or document for cross compiling. It depends on each use case.
Sometimes it is hard to set up such an environment on your host. For example, if your application requires lots of third-party libraries, you may need to prepare all of them as arm64 binaries on your host.
It also depends on the build tools involved. For example, OpenCV uses cmake while weston uses meson or autogen, and it is very easy to hit missing dependencies or even bugs when running those tools.
You could also try qemu-user-static + chroot + our sample rootfs to emulate an arm64 environment on your x86 host and build the code directly. This would be less troublesome.
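A rough outline of that approach, assuming an Ubuntu host and the sample rootfs unpacked at the usual Linux_for_Tegra/rootfs location (an untested sketch; the path is a placeholder):

```shell
# Installing qemu-user-static registers arm64 binfmt handlers.
sudo apt-get install qemu-user-static
ROOTFS=./Linux_for_Tegra/rootfs

# The static emulator must exist inside the chroot so the kernel
# can launch arm64 binaries through it.
sudo cp /usr/bin/qemu-aarch64-static "$ROOTFS/usr/bin/"

# Enter the emulated arm64 environment and build directly there.
sudo chroot "$ROOTFS" /bin/bash
```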
There is also no official guidance for such a chroot method, but you might refer to some of the online resources found through a Google search.
I saw that. It was a start, but in this case things broke because I don’t have the arm64 libraries in my path. I may try linuxdev’s solution.
For my goals that was one of the first things I tried. It didn’t last long. For example, apt update worked great, but apt upgrade completely hung after downloading the packages. I even tried installing a simple package and it hung the same way: apt sitting there at 100% CPU but nothing happening. (I let it run 24 hours trying to install screen.)
You might try this script’s configuration and see if it works better for you.
In testing apt would hang as you describe if /dev/pts was bind mounted. The solution for me was to mount a new instance of devpts within the chroot instead.
Please let me know if you experience the same issue since I would like to confirm that as the cause for my own curiosity.
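For reference, the difference looks like this. ROOTFS is a placeholder for wherever the chroot lives, and the mount options shown are common devpts defaults, not necessarily exactly what the script uses:

```shell
ROOTFS=./Linux_for_Tegra/rootfs

# This is the variant that hung apt (sharing the host's pts):
#   sudo mount --bind /dev/pts "$ROOTFS/dev/pts"

# This is what worked: a fresh, private devpts instance.
sudo mount -t devpts -o newinstance,gid=5,mode=620,ptmxmode=0666 \
    devpts "$ROOTFS/dev/pts"
```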
I have been in awe of all your posts here…and I think you’re psychic…I WAS bind mounting /dev/pts…wow. I will try your script asap.
Thx :) Hope it works. To be clear, my script still mounts /dev/pts … It just doesn’t bind mount it. In testing, for me, I found separate instances of special filesystems were more reliable.
I am working on something fancier and hopefully even more stable in Python. I finished preliminary tests on Friday and hope to publish it on Monday as a replacement for the script I just linked to. I will update in this thread as well.
So apt upgrade is working. One little glitch I noticed:
Preparing to unpack .../033-python3-apport_2.20.9-0ubuntu7.9_all.deb ...
Unpacking python3-apport (2.20.9-0ubuntu7.9) over (2.20.9-0ubuntu7.7) ...
Preparing to unpack .../034-apport_2.20.9-0ubuntu7.9_all.deb ...
Running in chroot, ignoring request.
 * Stopping automatic crash report generation: apport
/etc/init.d/apport: 68: /etc/init.d/apport: cannot create /proc/sys/fs/suid_dumpable: Read-only file system
/etc/init.d/apport: 79: /etc/init.d/apport: cannot create /proc/sys/kernel/core_pattern: Read-only file system
[fail]
That’s caused by the script mounting /proc as read only, but if the upgrade itself doesn’t actually fail, you should be fine, and the “fail” can safely be ignored. It’s normal for daemons to try to restart themselves when upgraded; inside a chroot that will fail, and that’s expected. If the apt upgrade itself is totally failing and blocked at that package, probably the easiest solution is to temporarily remount /proc as rw:
(from within the chroot)
mount -o remount,rw procfs /proc
But I would recommend doing that only if something like apt is completely blocked. The script’s default is ro because it’s safer in case a malfunctioning process tries to do something to, say, /dev/hda (or sda, or nvme0). It won’t prevent a chroot escape or intentional damage to the host by decently written malware, but it will help prevent accidents. In this case apport probably shouldn’t be trying what it’s trying, but there likely isn’t any harm in allowing it rw access this once. /proc is not a real filesystem, so even if things are changed, they shouldn’t persist across a reboot.
More on /proc/sys if you’re curious:
Well, your script worked great; I’m just struggling with the most important part - various issues in how the backport iwlwifi driver compiles and whatnot. Is there any reason not to mount the rootfs on a working Nano and do the compile/install that way?
Glad to hear it.
It might work. It might not. I’ve never tried it. As far as how to compile the kernel, I follow the instructions in the L4T documentation to cross compile and they work for me (see under Kernel Customization).
I can tell you it’ll be a lot faster outside of a qemu chroot, even if it does work. One small note about JetPack 4.3 you might find useful: the way apply_binaries.sh works has changed, and if you use your own kernel you will probably want to add the ‘-t’ option when you apply the binaries so that it doesn’t overwrite your kernel. Otherwise it’ll use the OTA kernel from a debian package. I suspect Nvidia will change the way the script operates in the next release so this won’t be necessary.
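In other words, something like the following when applying binaries. This is a sketch based on the note above; check the script’s own usage output on your release, since the exact flags may differ:

```shell
cd Linux_for_Tegra
# "-t" keeps the kernel you built instead of installing the one
# from the OTA kernel debian package (JetPack 4.3 behavior).
sudo ./apply_binaries.sh -t
```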
So I pushed out an update to my little collection of scripts. You may find tegrity-qemu a better replacement for the enter_chroot.sh script.
It’s smarter about how it does what it does.
You can do “sudo tegrity-qemu path/to/rootfs --enter” to enter a rootfs in the same way.
There are some other scripts included as well (see the README.md), but they’re very untested. However, you may find tegrity-toolchain useful for easily downloading and installing Nvidia’s recommended toolchain. The tegrity-kernel script does not yet support adding custom drivers, so I’m not sure how much use it would be to you.