Note for NVIDIA: The “Tegra Linux Driver Package Development Guide” is missing from archives for the TK1 releases (L4T R21.x and earlier, JetPack3.1 and earlier).
The missing documents will have what you need, but I’ll add some information here (presumably someone will restore the earlier documents).
I don’t have Ubuntu set up on my PC (I use Fedora), but because Ubuntu provides cross compile tools directly as packages, it is probably the way to go. However, the TK1 has been out of feature development for some time now and only receives maintenance patches. Most of the setup instructions you will find refer to a host running Ubuntu 18.04 LTS, whereas development setup for a TK1 will tend to assume a host running Ubuntu 14.04 LTS or Ubuntu 16.04 LTS. Tools available via the “apt” mechanism on newer releases may at times be incompatible with the older TK1 installations, especially if you try to compile bootloader software.
Since I am not set up on Ubuntu I cannot give you the details, but the docs for your particular L4T release will. To see your L4T release use “head -n 1 /etc/nv_tegra_release”.
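As an illustration only (the exact trailing fields differ from release to release), on an R21.5 system the command and the start of that file’s first line would look something like this, with the REVISION field telling R21.5 apart from R21.8:

head -n 1 /etc/nv_tegra_release
# R21 (release), REVISION: 5.0, ...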
The link to each release can be found here (you’ll probably need to go there, log in, and go there a second time to see the content):
https://developer.nvidia.com/embedded/linux-tegra-archive
If you need to know which L4T release is used with a given JetPack release, then this URL can help:
https://developer.nvidia.com/embedded/jetpack-archive
Documents for TK1 can be found here (also might require login then second click on link):
https://developer.nvidia.com/embedded/downloads#?tx=$product,jetson_tk1
One point of confusion people sometimes run into is that some of the more recent L4T releases were patch releases only and did not include a new JetPack release…these were flashed purely on the command line via the driver package plus sample rootfs. The earlier JetPack3.1 was then used for package installs on L4T R21.5 through R21.8, since patch releases remain compatible with their most recent JetPack. In particular, JetPack3.1 is the last JetPack for a TK1, and thus the last installer for optional packages like CUDA. If you had flashed with JetPack3.1, then you would have L4T R21.5, but you could have flashed on the command line all the way up to R21.8 and then installed optional packages via JetPack3.1. This is the JetPack3.1 URL:
https://developer.nvidia.com/embedded/jetpack-3_1
I’m hoping the documents are restored so you can use those (they include tool versions and not just instructions). Meanwhile, if you want to build kernels, then all you need is the cross toolchain. This consists of a cross compiler, cross debugger, and so on…tools which run on a PC, but which understand the 32-bit arm/armhf/ARMv7-a architecture. When you see references to “arm” software it is typically 32-bit, but 64-bit is now mainstream, so you have to be careful to look for the 32-bit architecture (64-bit is arm64/aarch64/ARMv8-a, which you do not want).
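As a quick sanity check that a given cross compiler really targets 32-bit ARM, you can ask it directly. The “arm-linux-gnueabihf-” prefix below is just an assumption, it depends on which toolchain you installed:

arm-linux-gnueabihf-gcc -dumpmachine

That should print an “arm-…” triplet; if the prefix or the output says “aarch64”, then it is the 64-bit toolchain you do not want.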
Building kernels and kernel components is the simplest case you will run into. There are no user space libraries involved, and the kernel’s Makefile already includes options for using a cross compiler. Set a few variables, and you are good to go.
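A minimal sketch of those variables, assuming an armhf cross toolchain with prefix “arm-linux-gnueabihf-” on your PATH, the TK1’s 3.10 series kernel source, and an output directory of your own choosing (the output path below is purely hypothetical):

export ARCH=arm
export CROSS_COMPILE=arm-linux-gnueabihf-
export TEGRA_KERNEL_OUT=$HOME/build/tk1_kernel   # hypothetical out-of-tree build location
mkdir -p $TEGRA_KERNEL_OUT
make O=$TEGRA_KERNEL_OUT tegra12_defconfig
make -j4 O=$TEGRA_KERNEL_OUT zImage modules

Verify the defconfig name and any required CONFIG_LOCALVERSION value against the docs for your exact L4T release; “tegra12_defconfig” is what the R21.x kernel source names the TK1 (Tegra124) configuration, but check your source tree to be sure.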
When you get to user space you need both a cross linker runtime (capable of linking libraries), and you need the actual libraries you will link against (the “sysroot”). You will find that sources of the cross tools also provide a sysroot, and up to that point it isn’t too complicated. Often Ubuntu will have a cross toolchain and a runtime available directly as packages, although in some cases the release version may not be compatible (which is why I mentioned hosts used to be specified as Ubuntu 14.04 or 16.04, and not 18.04). The sysroot is where it starts getting complicated.
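On an Ubuntu host of a compatible release, that cross toolchain and runtime can often be had with nothing more than the following (these are the usual Ubuntu package names, but availability and version compatibility depend on your Ubuntu release):

sudo apt-get install gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf

The sysroot, however, is a separate piece and still has to match your TK1.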
That sysroot will contain the libc libraries and all kinds of basic infrastructure. However, if you followed the older traditional methods for new hardware, then you’d end up acquiring and building all of the libraries for your user space content beyond those most basic libraries. For example, if you build something for the GUI, then you might need X11 libraries, and these might depend on various OpenGL libraries, language internationalization libraries, security libraries, so on. It is somewhat astounding to see how many libraries some simple programs require. There is a simple workaround to this though.
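You can see this for yourself directly on the TK1 by running “ldd” against almost any program installed there, for example (the program name is just an example, use anything present on your system):

ldd /usr/bin/xterm

Every “=> /lib/arm-linux-gnueabihf/…” or “=> /usr/lib/arm-linux-gnueabihf/…” line is a shared library a cross link of that same program would need to find in the sysroot, which is exactly what the workaround below provides all at once.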
If you clone your TK1 and loopback mount the clone on the host PC in the right place, followed by creating a symbolic link to the correct subdirectory of the clone, then you have an entire sysroot which exactly matches the TK1 you are working on. You just need to make sure you installed all of the development content on the TK1, e.g., you might not normally install the development headers for a library, but if you install them into the clone (or better yet, onto the TK1 prior to creating the clone), then it will all just be there.
Often the host PC will have a directory “/usr/lib” (or a variant, e.g., “/usr/lib64”), and subdirectories may exist there for foreign architectures, e.g., “/usr/lib/arm-linux-gnueabihf” for 32-bit ARM (on the 64-bit Jetsons this would instead be “/usr/lib/aarch64-linux-gnu”, or some other architecture and environment naming convention). Such a subdirectory can be a symbolic link into the correct lib location on the loopback mounted clone.
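A rough sketch of that arrangement follows. The clone file name and mount point are hypothetical, and you should only create the symbolic link if the host does not already provide that directory through its own packages (back it up first if it does):

sudo mkdir -p /mnt/tk1_clone
sudo mount -o loop my_tk1_clone.img /mnt/tk1_clone   # raw clone of the TK1 rootfs
sudo ln -s /mnt/tk1_clone/usr/lib/arm-linux-gnueabihf /usr/lib/arm-linux-gnueabihf   # example only

You would then point the cross linker’s sysroot (or individual library search paths) at the mounted clone rather than copying libraries around.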
It is probably best if you wait for the development guide which is currently missing (someone will hopefully get the old TK1 guides back into the archive when they see this post). Meanwhile, I suggest you install any optional development packages you need on the TK1, e.g., headers which are optional under “/usr/include”, and then look up how to clone the TK1 root filesystem, the rootfs. See:
https://elinux.org/Jetson/Cloning
(incidentally, elinux.org may have some development information too, so I recommend browsing it)
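As an example of the optional development content mentioned above, before cloning you might install (directly on the TK1) the headers for whatever libraries your program links against; the package names here are purely illustrative, pick the ones your project actually needs:

sudo apt-get install build-essential libx11-dev zlib1g-dev

Anything installed this way ends up in the clone, and therefore in your sysroot, automatically.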