The short answer is that you can, but functionality may be limited.
The documents you see are basically for flashing the system. The non-rootfs partitions are used for boot, and these probably don’t need much change, if any, so long as the environment is set up correctly before loading a kernel which can live in that environment.
The “normal” flash takes the host PC’s “Linux_for_Tegra/rootfs/”, overlays some NVIDIA-specific drivers (when done manually this is the “sudo ./apply_binaries.sh” step), and at that point the content for the rootfs flash image is nearly complete. Depending on arguments passed to the flash.sh script, other content will be added in “rootfs/boot/” (which becomes “/boot” in the eMMC of the Jetson). The main edit during flash.sh (prior to becoming a rootfs image) is to add the kernel and extlinux.conf parameters (there may be more content added, but if you log a flash, then you’ll know exactly what is added).
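As a sketch of that manual flow, assuming an L4T driver package already unpacked on the host PC and the Jetson in recovery mode (the board name “jetson-tk1” and the path are assumptions; substitute your own):

```shell
# From the host PC, inside the unpacked driver package (path is an assumption):
cd Linux_for_Tegra

# Overlay the NVIDIA-specific binaries and libraries onto the sample rootfs:
sudo ./apply_binaries.sh

# Flash the eMMC rootfs partition, logging everything so you can see
# exactly what flash.sh adds to rootfs/boot/ before the image is built:
sudo ./flash.sh jetson-tk1 mmcblk0p1 2>&1 | tee flash.log
```

The `tee flash.log` is the easy way to get the “log a flash” record mentioned above.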
One of the first things to understand is the device tree. The kernel used to have issues with many individual hardware designs needing their own code in the kernel even though it was a single driver for some particular piece of hardware. Linus Torvalds ended up abstracting this out by passing some of the generic driver content (e.g., what address the device is at, or what clock speed to set) into the device tree (which is separate from the kernel despite being tightly coupled to it). Those drivers will be given device tree information, and if the information is incorrect, then something “bad” will happen (e.g., non-function, erratic function, or an outright crash). This is akin to passing an invalid argument to a function, except the function is the driver. Changing the driver might or might not require changing the device tree.
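If you want to see what the device tree actually hands to the drivers on a running Jetson, you can decompile the live tree (this assumes the `dtc` device tree compiler package is installed; the “serial@” node pattern is just a typical example, names vary by board):

```shell
# Decompile the running device tree back into readable source form:
dtc -I fs -O dts -o extracted.dts /proc/device-tree

# Look at what, e.g., a serial UART node tells its driver
# (addresses, clocks, status — "serial@" is a common node name pattern):
grep -A 5 'serial@' extracted.dts | head -n 20
```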
Then there is the kernel itself (which may require a different device tree even though it is the “same” driver, just a “newer” version of it). Going from such an old kernel to a brand new kernel means there will be a lot of drivers to go through to make sure their device tree entries are updated. Those changes could require detailed knowledge of the driver and/or hardware.
One other thing the previously mentioned “sudo ./apply_binaries.sh” script does is to overlay certain user space libraries and content onto the rootfs. There are times when user space software requires a specific API or ABI be present. In particular, content which is released in binary-only format, and which is loaded by other software, requires that the other software remain compiled in a compatible manner.
The most important example is the GPU driver. This is usually loaded by the Xorg GUI software, and its use goes beyond just hardware accelerated rendering. X11 itself is often considered (incorrectly) to be “desktop” software. The “desktop manager” software is what provides most of the GUI, not the X server…the X server provides a uniform interface to what is essentially a buffer, with functions useful to a monitor. The X server interprets events. Whether or not a monitor is attached, and whether or not it is a monitor and mouse/keyboard which produces or consumes these events, is secondary.
CUDA drivers (for much of the software you would use even if you never display anything) often receive their work through the X server. The X server loads a GPU driver, and the events are used as a kind of language in many cases. Much of the CUDA functionality requires X even though the buffer is not attached to any monitor. Since this driver is binary only, and because the X server directly links to this file, it becomes mandatory that the X server release stays constant in order to be compatible with the driver. If you go to the Jetson and run this command (assuming your “DISPLAY” variable is “:0”), then you’ll see how this is related:
egrep -i '(vendor|loadmodule|dlloader|abi)' /var/log/Xorg.0.log
You could also browse that log file.
This means some content, especially the Xorg server, must be built against that same ABI if you want CUDA to work right and if you want to use the NVIDIA hardware accelerated driver. If you use Nouveau, then this will fail. Porting an Xorg server from something like Ubuntu 14.04 LTS onto Ubuntu 20.04 will not be trivial. Imagine downgrading even glibc to work with this Xorg, or perhaps rebuilding all of the Xorg software to run under the Ubuntu 20.04 glibc.
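You can check which module ABI your running X server advertises, and whether the NVIDIA (rather than Nouveau) module actually loaded, with something like this (the log path assumes the default “:0” display):

```shell
# ABI versions the X server offers to its loadable modules:
grep -i 'ABI class' /var/log/Xorg.0.log

# Confirm which GPU driver module the X server loaded:
grep -iE 'LoadModule.*(nvidia|nouveau)' /var/log/Xorg.0.log
```

If the ABI version the NVIDIA module was built for does not match what the X server reports here, the module will refuse to load.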
Note that this driver is for a GPU wired directly to the memory controller; the desktop drivers are not only the wrong architecture, they are also for PCI devices. It isn’t possible to use a newer driver on arm32 because none exists: new feature additions for the TK1 ended quite some time ago. The same effort would still be difficult on the arm64 systems, but much easier, because the 64-bit world continues to progress.
So you can make 20.04 work. It just might not do the things you most want it to do, and if it does, it will be because you did a lot of work.