About JetPack and Driver Packages…
JetPack is just a front end: it either acts like a download manager or runs the actual flash software, depending on whether you are flashing or adding packages. In the following URLs you may have to log in and then hit the URL again before the page shows up correctly, but if you go to the JetPack page for a particular version from here:
https://developer.nvidia.com/embedded/jetpack-archive
…and then drill down to the JetPack3.1 listing to go here:
https://developer.nvidia.com/embedded/jetpack-3_1
…you will see “JetPack 3.1 introduces L4T 28.1”.
Note that R28.1 is actually flashed via command line software available directly as the “driver package”. Going here for a list of driver packages:
https://developer.nvidia.com/embedded/linux-tegra-archive
…you will find R28.1 is here:
https://developer.nvidia.com/embedded/linux-tegra-r281
This latter R28.1 URL lists everything associated with that particular release, whereas JetPack is a container for downloading those components related to R28.1. JetPack3.1 actually downloads the R28.1 driver package, R28.1 documentation, R28.1 sample rootfs, and most anything else from the R28.1 URL. When you tell JetPack to flash a Jetson, what it actually does is run something like this on the command line (this assumes the Jetson is in recovery mode with the micro-B USB connected):
sudo ./flash.sh jetson-tx1 mmcblk0p1
A variation on this which uses the maximum possible eMMC space:
sudo ./flash.sh -S 14580MiB jetson-tx1 mmcblk0p1
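For reference, here is roughly what a purely command line flash (no JetPack) looks like using the R28.1 driver package plus sample rootfs. Treat this as a sketch…the exact archive file names are from memory and may differ slightly from what the R28.1 page actually provides:
# Unpack the R28.1 driver package on the host PC (file name may differ).
tar xjf Tegra210_Linux_R28.1.0_aarch64.tbz2
# Unpack the sample rootfs into the rootfs/ subdirectory (sudo preserves ownership and permissions).
cd Linux_for_Tegra/rootfs
sudo tar xjf ../../Tegra_Linux_Sample-Root-Filesystem_R28.1.0_aarch64.tbz2
cd ..
# Add the NVIDIA-specific binaries into the sample rootfs.
sudo ./apply_binaries.sh
# With the Jetson in recovery mode and the micro-B USB connected:
sudo ./flash.sh jetson-tx1 mmcblk0p1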
The part where JetPack is most convenient is that it downloads a manifest of URLs for software compatible with either your desktop PC or the Jetson for that release. It then gives a list of possible compatible components for you to choose from, and will basically use a combination of wget and ssh to download and install those components over ethernet.
The individual URLs for the various components are not published separately. You can run JetPack, and then if you know which files to look in, you can manually run wget. You'd also have to know the install order of the packages if you want to avoid frustration.
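To illustrate, what JetPack does over the network boils down to something like the following. This is a hypothetical sketch only…the package name, the URL, and the Jetson's address are all placeholders, not real values:
# The real URL would come from the manifest files JetPack downloads (this one is a placeholder).
wget -O some-component_arm64.deb '<URL found in the JetPack manifest>'
# Copy the package to the Jetson (user and address are examples) and install it over ssh.
scp some-component_arm64.deb nvidia@192.168.1.2:/tmp/
ssh nvidia@192.168.1.2 'sudo dpkg -i /tmp/some-component_arm64.deb'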
JetPack works only from an Ubuntu PC host. Command line flash works from any 64-bit Linux distribution. I normally use Fedora, so this is where I live. Sometimes I run JetPack from an old dual core Atom laptop, and then copy the files with URLs to my host where I manually run various package downloads or installs (I initially run from the laptop, it is more convenient…but I save those extra files and can later avoid using the laptop…it has a massive 10.1" screen).
About Kernel Building In the Early Days of R24.x…Skip this if You Don’t Have Time for a Story…
First, the Image file is used with R28.1 (not the zImage). When you build zImage it first builds Image, and then creates a compressed zImage version. It doesn’t hurt to build zImage, but it is wasted space.
Second, you can use the configuration tegra21_defconfig, but you are better off not trusting this to be the same as your installed system. It might be the same, but you don't know. On an unmodified system always start with the “/proc/config.gz” being copied somewhere safe for permanent reference. Also write down the result of “uname -r” for that config. If you build with that config and CONFIG_LOCALVERSION set to match the suffix of “uname -r”, then you have an exact 100% match for your starting configuration. If you build from some other configuration, then you might be building something completely unrelated, and there is no basis for knowing whether an error is due to the configuration or due to some bug.
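As a minimal sketch of that procedure (this assumes a native build directly on the Jetson, with the kernel source unpacked at “~/kernel_src”…adjust the paths to your case):
# On the running Jetson, save the current config and note the exact kernel release.
cp /proc/config.gz ~/config.gz.bak
uname -r
# In the kernel source, start from that exact config.
cd ~/kernel_src
zcat ~/config.gz.bak > .config
# Edit CONFIG_LOCALVERSION in .config to match the suffix of "uname -r"
# (on a stock install this is typically CONFIG_LOCALVERSION="-tegra").
make oldconfig
# Build Image (not zImage) plus modules.
make -j4 Image modules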
Any R24.x release should be suspect as to why it is being used. The R24.x series was the very first TX1 series. There was a transition going on at the time from 32-bit to 64-bit, and there wasn’t much (if any) previous 64-bit code or experience to go by. This was new. Additionally, the first R24.x was 64-bit only in kernel space…user space was still 32-bit. A 32-to-64-bit conversion was going on while user space packages were being updated from antique armhf software. The result is that building kernels had some differences. R24.1 for sure required both a 64-bit compiler and a 32-bit compiler simultaneously during a kernel build. At some point the requirement for the 32-bit compiler was removed (I’m not sure at which point…perhaps R24.2.1 did not need this, but I can’t say for sure). Here is an example of a kernel build (cross-compile) from R24.1:
https://devtalk.nvidia.com/default/topic/936880/jetson-tx1/jetson-tx1-24-1-release-need-help-with-complier-directions-can-not-complie/post/4885136/#4885136
In those instructions for R24.1 you will note that this is being set to a 32-bit cross compiler:
export CROSS32CC=
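For reference, the environment from those R24.1 instructions set up both compilers roughly like this (the toolchain locations below are just examples of where I installed mine, not required paths):
# 64-bit cross compiler prefix, used for the kernel itself.
export CROSS_COMPILE=/usr/local/aarch64-unknown-linux-gnu/bin/aarch64-unknown-linux-gnu-
# 32-bit cross compiler (note this is the full gcc binary, not a prefix)...used for the 32-bit vdso.
export CROSS32CC=/usr/local/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi-gcc
export ARCH=arm64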
The earliest R24.x releases needed some bug fixes as well, but I think those were no longer needed by R24.2.1 (I can’t swear to it though).
Explaining why two compilers were required isn’t really what you’ve asked, but it is still useful to know what you are getting into if you run the R24.x series. It is easiest to start by explaining a difference from desktop PC history, so hopefully you will forgive me for going seemingly far from the topic with what follows.
In the desktop PC world everything used to be 32-bit. Basically i386 through i686, sometimes just labelled x86. Compilers in the PC world were dedicated to this one architecture. Then came 64-bit, and things got more complicated. The 64-bit CPUs were backwards compatible and could still execute 32-bit code, but their native instruction set was the 64-bit x86_64. Previous compilers from the 32-bit days were natively 32-bit. On a 64-bit system, building that backwards compatible 32-bit x86 code is technically a cross compile, because the native architecture is now x86_64…the 32-bit architecture is foreign on a 64-bit system despite the 64-bit system’s ability to execute the code. However, since all of the newer Pentium-and-later 64-bit systems supported this backwards compatible 32-bit code, support for both native 64-bit and foreign 32-bit ended up integrated into the same compiler.
ARM architecture has a similar story when going from 32-bit to 64-bit, but the compilers are not integrated together in a single package. Furthermore, the 32-bit compatibility mode is absolutely terrible in performance even when there is no conversion back and forth between a 64-bit kernel and 32-bit applications…a desktop PC also has a conversion penalty, but the penalty is much harsher in the ARM world.
On the desktop PC, if one wants to run a 64-bit app, then the 64-bit linker is used, and the 64-bit libraries can be dynamically used in combination. When a 32-bit application runs, then you need a second linker for 32-bit…and you need a bunch of compat libraries which are 32-bit. This was really common in the PC world, but ARM does not do this. Embedded systems are much smaller, and what the PC essentially did…installing two sets of user space, one 32-bit and one 64-bit…isn’t practical on tiny embedded devices. Installing compat 32-bit (armhf/ARMv7-a with NEON and hardware floating point) plus native 64-bit (ARMv8-a/arm64/aarch64) was never set up so the two could exist simultaneously. The two can coexist, but it is a very manual and tedious operation to put both on the same system at the same time. It isn’t something a sane person wants to do unless it is mandatory (or if the sane person wants to go insane…sort of like that commercial where some people like paper cuts on the tongue and so they do this on purpose).
Back to Kernel Build…
Do you really need to stick to the R24.x install? Is there any chance you could flash to R28.1? The only reason I know of that anyone sticks to these earlier installs is if there is some special 32-bit program compatibility requirement which is mandatory. The earlier URL on R24.1 will tell you how to compile for that series, but probably the bugs listed won’t need fixing (or some might), and maybe (not sure) R24.2.1 won’t need the second compiler (if you get a “vdso” error, then this is probably the missing 32-bit compiler issue). Beyond that, the rest of that URL is still the explanation of building for that kernel.
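If you can move to R28.1, then the kernel build is simpler since only the 64-bit compiler is needed. A minimal cross compile sketch (the toolchain prefix, source path, and output path are examples from my own habits…adjust them to your setup, and prefer your saved /proc/config.gz over the defconfig as mentioned above):
# 64-bit cross toolchain only...no CROSS32CC is needed on R28.1.
export ARCH=arm64
export CROSS_COMPILE=/usr/bin/aarch64-linux-gnu-
# Keep build output separate from the pristine kernel source.
export TEGRA_KERNEL_OUT=~/build/kernel_out
mkdir -p "$TEGRA_KERNEL_OUT"
cd ~/kernel/kernel-4.4
# Either start from the defconfig, or copy your saved config in as $TEGRA_KERNEL_OUT/.config first.
make O=$TEGRA_KERNEL_OUT tegra21_defconfig
make O=$TEGRA_KERNEL_OUT -j4 Image modules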