Cross compilation for Jetson TK1 in host PC without having a development board

Is it possible to set up cross compilation on a host machine for the Jetson TK1 without having a development board?
If yes, then what is the procedure for that?

Host PC: Ubuntu 18.04
SDK Manager:

Let me know if any other information is required.

I can’t give you exact details (especially since I work on Fedora), but cross-platform development like this is common. Basically, you would install cross tools for 32-bit arm32/ARMv7-a/armhf. The more recent platforms are 64-bit arm64/aarch64/ARMv8-a. You can still get 32-bit hardware (especially via third party vendors, e.g., the Apalis TK1), but software development for this platform stopped some time back (other than perhaps bug fixes).
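As a rough sketch on an Ubuntu 18.04 host, the armhf cross tools can be installed via apt (these package names are the standard Ubuntu ones; other distributions will differ):

```shell
# Install the 32-bit ARM hard-float (armhf) cross toolchain.
sudo apt-get update
sudo apt-get install gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf

# Verify the cross compiler is installed and reports its target triplet.
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc -dumpmachine   # should print arm-linux-gnueabihf
```

The `arm-linux-gnueabihf-` prefix is how Ubuntu names the armhf cross tools; everything in the toolchain (gcc, g++, ld, objcopy, and so on) uses that same prefix.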

You can find a listing of the supported L4T releases (basically Ubuntu with NVIDIA direct hardware access drivers; these are what JetPack/SDK Manager installs to the Jetson):

A list of JetPack releases is here:

Note: You will probably have to go to the URL, log in, and then go there again to see the actual content.

Third parties also have support software since there will be modifications when using a different carrier board or module.

Note: Newer tool releases may not work on a TK1 (32-bit). Much of the software was written long ago and may require a 4.x release of gcc and other tools.

About Cross Compile…

If you are working on “bare metal”, which is what bootloaders and kernels run on, then it is quite simple. You just install the cross tool chain and build naming the alternate tools.
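For example, a TK1 kernel build only needs ARCH and CROSS_COMPILE named. This is a sketch; tegra12_defconfig is the usual TK1 config in the L4T 3.10 kernel source, but treat the config name and paths as assumptions for your particular source tree:

```shell
# Cross compile a kernel; the defconfig name and paths are examples.
cd kernel_source
make ARCH=arm O="$PWD/build" CROSS_COMPILE=arm-linux-gnueabihf- tegra12_defconfig
make ARCH=arm O="$PWD/build" CROSS_COMPILE=arm-linux-gnueabihf- -j"$(nproc)" zImage
```

Note that no sysroot or target libraries are involved at all, which is why bare metal is the simple case.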

As soon as you get to user space it gets more complicated since there are now libraries and linkers involved. The linker must run on the desktop PC’s architecture but understand armhf (or whatever cross architecture you are interested in). The libraries being linked against must also be present, or the linker has no way to know what it is working with.

Ubuntu has several of the cross tools available natively via the apt mechanism. Much of this originally comes from Linaro, and you will probably see their name mentioned a lot. An example of a more recent Linaro toolchain release is here:

An explanation by Linaro is here:

The cross tool chain is usually the compiler and various related compiler tools. The libraries being linked and the tools doing the link are a combination of the “sysroot” and “runtime”.

Generally speaking, libraries and the tools that use them have a major release version and are more or less compatible even across minor variations. A published sysroot/runtime will contain a bare minimum of actual libraries, and you’ll find a lot of frustration with complicated user-space programs since they probably require more libraries than a generic sysroot/runtime provides. As such, you can copy the libraries and support files (e.g., headers, but not executables like the linker) directly from an installed system, and it should work.
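One way to do that copy is with rsync over ssh from a running Jetson. This is a sketch; the hostname, user, and destination path are placeholders:

```shell
# Pull library directories and headers from a running Jetson into a local
# sysroot directory. "jetson-tk1.local" and all paths are examples.
SYSROOT="$HOME/tk1-sysroot"
mkdir -p "$SYSROOT/usr"
rsync -a ubuntu@jetson-tk1.local:/lib "$SYSROOT/"
rsync -a ubuntu@jetson-tk1.local:/usr/lib "$SYSROOT/usr/"
rsync -a ubuntu@jetson-tk1.local:/usr/include "$SYSROOT/usr/"
```

You would then pass this directory via `--sysroot` (or equivalent) when cross compiling.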

If you have a 32-bit armhf root filesystem, then you can often use this instead of the libraries from cross tool chains. One source of this for Jetsons is to clone an actual Jetson. Another option is to use the sample root filesystem which is used to flash a Jetson.

If you go to a particular L4T release page, e.g., R21.8 for armhf/TK1, then the “driver package” is for command line flashing (or preparation of the image to be flashed). The “sample root filesystem” is purely the Ubuntu which would otherwise be flashed. Unpacking the “driver package” as a regular user produces the “Linux_for_Tegra/” subdirectory, including an empty “Linux_for_Tegra/rootfs/” directory. You can then unpack the “sample rootfs” into “Linux_for_Tegra/rootfs/”, and most of this content is what could be used to replace libraries which might otherwise come from the limited runtime/sysroot of a cross development environment. Do know that to get an actual full set of valid libraries from the “Linux_for_Tegra/rootfs/” directory you will need to go to the “Linux_for_Tegra/” directory and run “sudo ./” (this installs the NVIDIA-specific libraries and drivers into “rootfs/” and turns “Ubuntu” into “L4T”).
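The unpack steps above can be sketched as follows. The archive filenames are examples for R21.8; check the release page for the exact names:

```shell
# Unpack the driver package as a regular user (filename is an example).
tar xjf Tegra124_Linux_R21.8.0_armhf.tbz2
cd Linux_for_Tegra

# Unpack the sample root filesystem into rootfs/ (sudo preserves ownership
# and permissions, which matter for a root filesystem).
sudo tar xjpf ../Tegra_Linux_Sample-Root-Filesystem_R21.8.0_armhf.tbz2 -C rootfs/

# Then, still in Linux_for_Tegra/, run the NVIDIA setup script mentioned
# above to install the NVIDIA-specific libraries and drivers into rootfs/.
```

After that, `Linux_for_Tegra/rootfs/` can serve as the sysroot for cross linking.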

A clone is superior since you can update it from a Jetson, install extra packages (e.g., CUDA), and so on. The basic system in “rootfs/” does not include optional packages, and this is where JetPack/SDK Manager comes in. JetPack/SDKM is just a front end to the driver package and sample rootfs during a flash; after the flash it is used to install CUDA and other optional “goodies” through ssh to the Jetson. There are tar archives within the unpacked “driver package” which can be used to install some of the content, but it complicates life to not use JetPack/SDKM. JetPack/SDKM does download a repository.json file which contains URLs for downloading the optional content, but this requires more knowledge and effort.

In some cases QEMU can be used to emulate the armhf environment and directly run the content of the “rootfs/”, e.g., to run its “apt update” or other binaries directly, but this in turn takes even more knowledge and effort (this is definitely not just plug-n-play).
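A sketch of the QEMU approach, assuming an Ubuntu host with the qemu-user-static package (which registers binfmt handlers so armhf binaries run transparently); all paths are examples:

```shell
# Install user-mode QEMU with static binaries (Ubuntu package name).
sudo apt-get install qemu-user-static

# Copy the static ARM emulator into the armhf rootfs so that binaries
# invoked inside the chroot can be resolved and emulated.
sudo cp /usr/bin/qemu-arm-static Linux_for_Tegra/rootfs/usr/bin/

# chroot into the armhf rootfs; armhf binaries now run under emulation.
sudo chroot Linux_for_Tegra/rootfs /bin/bash
# e.g., inside the chroot: apt update
```

You may also need to bind-mount /proc, /sys, and /dev into the rootfs before some tools (apt in particular) will behave, which is part of why this route takes more effort.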