Jetson TX1 Kernel Compilation

I recently bought a Jetson TX1 development kit.
I need to add the soc_camera driver to the kernel, so I need to compile the kernel.

What is the best way to compile the kernel? In the forum I found two ways:

  1. Compile the kernel on the development kit itself:
    https://devtalk.nvidia.com/default/topic/762653/?comment=4654303

  2. Compile on the host Linux PC.
    The instructions are contained within the L4T documentation, available here: http://developer.download.nvidia.com/embedded//L4T/r23_Release_v1.0/Tegra_Linux_Driver_Package_Documents_R23.1.1.tar
    After extracting the tarball, open l4t_getting_started.html and check out the “Building the NVIDIA Kernel” section.

So which is the best approach?
And is there an updated guide for compilation?

And if the second approach is best, what are the environment variables for an Ubuntu 14.04 64-bit PC?

The first method is for a JTK1, which is not applicable to a JTX1. The second method you list is correct for a JTX1. R23.2 is out and should be used, but instructions should be the same as for R23.1.

I use cross-compilers from the most recent Linaro tools, found here:
https://releases.linaro.org/components/toolchain/binaries/latest-5/

You would use the aarch64-linux-gnu and arm-linux-gnueabihf toolchains. The environment would be like this, but adjusted to wherever you put the compilers:

export CROSS_COMPILE=/usr/local/gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
export CROSS32CC=/usr/local/gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
export TEGRA_KERNEL_OUT=/home/whoever_i_am/jetson/build
export ARCH=arm64

Thanks for the reply.

I downloaded the gcc-linaro-5.3-2016.02-x86_64 version instead of gcc-linaro-5.2-2015.11-x86_64, as it is the newest one.

Does that make any difference, or do I need to download the gcc-linaro-5.2-2015.11-x86_64 version specifically?

I am also getting the "error: r7 cannot be used in asm here" problem,
so I followed the solution in the link below:
https://devtalk.nvidia.com/default/topic/901677/jetson-tx1/building-tx1-kernel-from-source/post/4749845/#4749845

Is this right, or do I need to do something else?

I am using Linaro 5.2 now, and following the link you mentioned, the kernel compiles successfully.

The version I listed is just the one from my last download, when I made my notes. It does mean I tested with that version and it worked correctly. The Linaro tools typically see a lot of work on generating better ARM assembler, so more recent versions tend to do better in some respects. Since I tested under 5.2, I suspect the most recent 5.2 might be best. I'm unsure what changed in 5.3, but if user space is compatible, then 5.3 could be OK as well.

Some of the most significant changes between Linaro releases are in what the tools consider an error versus a warning, along with attempts to generate better ARM assembler. As errors and warnings crop up that were not previously examined, changes are evaluated to decide whether the next version should alter the error/warning, whether the generated assembler should change, or whether the new error/warning should stand.

Since 5.2 has been tested and is known to work, and since 5.3 failed for you, I would suggest using the most recent 5.2 (there should be no need to use that specific dated release of 5.2; just use the latest 5.2).

I am also getting the "error: r7 cannot be used in asm here" problem and a "tegra clock error" with the 5.2 version.

So is there a problem with my host environment?

But after fixing the error as per the link below, I can compile successfully.
https://devtalk.nvidia.com/default/topic/901677/jetson-tx1/building-tx1-kernel-from-source/post/4749845/#4749845

And another problem:
if I try to insert the generated .ko on my kit, I get the following error:
"insmod: ERROR: could not insert module ov5693_v4l2.ko: Invalid module format"

I think this is due to a kernel version mismatch. Right?
Do I need to update the zImage on the kit as well?
Or is it some other issue?

If you go to the module directory where the file ov5693_v4l2.ko is, what is the output of:

file ov5693_v4l2.ko
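Beyond `file`, one quick check is comparing the module's vermagic string against the running kernel, since "Invalid module format" usually means they differ. This is a sketch, not from the thread; `versions_match` is a hypothetical helper, and the module path is just an example:

```shell
# Compare a module's vermagic against the running kernel.
# "insmod: Invalid module format" usually means these do not match.

versions_match() {
    # the kernel's first check is an exact string comparison
    [ "$1" = "$2" ]
}

KO=ov5693_v4l2.ko   # hypothetical path; point this at your module

if [ -f "$KO" ]; then
    # vermagic looks like "3.10.67-g458d45c SMP preempt mod_unload aarch64";
    # the first field is the kernel version the module was built for
    mod_ver=$(modinfo -F vermagic "$KO" | awk '{print $1}')
    run_ver=$(uname -r)
    if versions_match "$mod_ver" "$run_ver"; then
        echo "OK: module matches kernel $run_ver"
    else
        echo "MISMATCH: module=$mod_ver kernel=$run_ver"
    fi
fi
```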

Please find the insmod and file output for ov5693_v4l2.ko, built on the host PC with the Linaro 5.3 cross-compiler:

ubuntu@tegra-ubuntu:~$ sudo insmod ov5693_v4l2.ko 
insmod: ERROR: could not insert module ov5693_v4l2.ko: Invalid module format
ubuntu@tegra-ubuntu:~$ file ov5693_v4l2.ko 
ov5693_v4l2.ko: ELF 64-bit LSB  relocatable, ARM aarch64, version 1 (SYSV), BuildID[sha1]=e9ab901ce90f82fcd8f0c3ff1ccb99a5b67da51a, not stripped

And the same file output for a prebuilt .ko file in /lib/modules/…

root@tegra-ubuntu:/lib/modules/3.10.67-g458d45c/kernel/drivers/media/platform/soc_camera# file soc_camera_platform.ko 
soc_camera_platform.ko: ELF 64-bit LSB  relocatable, ARM aarch64, version 1 (SYSV), BuildID[sha1]=1400b81c60600e9cbe451662fde3313d196b38bc, not stripped

But I can insmod this soc_camera_platform.ko successfully.

Found the solution…

I updated the zImage and modules in "Linux_for_Tegra/rootfs/" and flashed the image using:

sudo ./flash.sh jetson-tx1 mmcblk0p1
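For anyone following along, the copy steps before running flash.sh looked roughly like this. The paths are assumptions based on a standard Linux_for_Tegra layout (the L4T, TEGRA_KERNEL_OUT, and TEGRA_MODULES_OUT locations here are hypothetical placeholders); adjust them to your tree:

```shell
# Hypothetical paths; adjust to your setup.
L4T=/home/me/Linux_for_Tegra
TEGRA_KERNEL_OUT=/home/me/build
TEGRA_MODULES_OUT=/home/me/modules

ZIMAGE="$TEGRA_KERNEL_OUT/arch/arm64/boot/zImage"
ROOTFS="$L4T/rootfs"

if [ -f "$ZIMAGE" ]; then
    # new kernel into the rootfs that flash.sh writes to the device
    sudo cp "$ZIMAGE" "$ROOTFS/boot/zImage"
    # matching modules (the "uname -r" directory) alongside it
    sudo cp -r "$TEGRA_MODULES_OUT/lib/modules/." "$ROOTFS/lib/modules/"
fi
```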

Now I can insert any module compiled on the host PC.
I found that the kernel version I built on the host PC was "3.10.67", while on the Jetson it was "3.10.67-g458d45c". But after flashing the new image, /lib/modules/ contains another "3.10.67" directory alongside "3.10.67-g458d45c".

So is this right?

I could be wrong, but I believe the module directory name is dictated by the name of your kernel headers. Make sure the kernel_headers name matches your target kernel name.

The base version of "uname -r", e.g., "3.10.67", comes from the kernel source.

The suffix of "uname -r", e.g., "-g458d45c", comes from the .config file's CONFIG_LOCALVERSION (usually set in "make menuconfig", but it can also be edited directly).

The base kernel version plus CONFIG_LOCALVERSION gives "uname -r", e.g., "3.10.67-g458d45c". Modules are searched for in "/lib/modules/$(uname -r)".
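As a sketch of how those pieces combine (the example values are the ones from this thread):

```shell
# How "uname -r" is assembled, using the versions from this thread.
BASE=3.10.67              # from the kernel source tree (Makefile version fields)
LOCALVERSION=-g458d45c    # from CONFIG_LOCALVERSION in the .config
UNAME_R="${BASE}${LOCALVERSION}"

echo "uname -r      -> $UNAME_R"
echo "module search -> /lib/modules/$UNAME_R"
```

If CONFIG_LOCALVERSION is empty, you get the plain "3.10.67" directory — which is exactly the extra directory you saw in /lib/modules/.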

Assuming you want to output kernel builds to “/home/me/build”, and to output module installs (meaning what would end up copied over to Jetson, not intermediate build files) to “/home/me/modules”, you might do something like this:

mkdir /home/me/build
mkdir /home/me/modules
export TEGRA_KERNEL_OUT=/home/me/build
export TEGRA_MODULES_OUT=/home/me/modules

If your compiler is linaro-5.2-2015.11 (one of several compilers I have), and installed to /usr/local (which is where I put pre-built non-rpm-packaged binary installs), you would set up more environment like this:

export CROSS_COMPILE=/usr/local/gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
export CROSS32CC=/usr/local/gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
export ARCH=arm64

You would then build something like this (note that "O=" can be used in any step; mixing this up can result in confusion). Not all steps are always needed:

cd ...wherever your L4T kernel source is...
make mrproper
make O=$TEGRA_KERNEL_OUT mrproper
make O=$TEGRA_KERNEL_OUT tegra21_defconfig
make O=$TEGRA_KERNEL_OUT menuconfig
...adjust to your settings, e.g., the LOCAL version...
make O=$TEGRA_KERNEL_OUT zImage
make O=$TEGRA_KERNEL_OUT dtbs
make O=$TEGRA_KERNEL_OUT modules
make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$TEGRA_MODULES_OUT

You’ll get a lot of files in "$TEGRA_KERNEL_OUT", some of which you’ll use directly. "$TEGRA_MODULES_OUT" is where it gets interesting, and where you can test whether your CONFIG_LOCALVERSION is what you expect. There will be a subdirectory "lib/", and within that "firmware/" and "modules/". Firmware is not dependent on "uname -r", so if the firmware was already on the Jetson, that subdirectory can be ignored. The "modules/" subdirectory, though, must follow the "uname -r" scheme mentioned above. If you have added the suffix "-g458d45c", then "${TEGRA_MODULES_OUT}/lib/modules/$(uname -r)/" should exist. Installing the kernel from "${TEGRA_KERNEL_OUT}" implies that the "uname -r" from "${TEGRA_MODULES_OUT}" will be used for finding modules.

Note that although various compiler and kernel source combinations will produce different errors or warnings, some steps are independent, and those errors or warnings might be ignorable. You will always want to start with mrproper for a pristine setup, and put your ".config" in place manually or via tegra21_defconfig and menuconfig; after this, if the base configuration is the same, you could go straight to modules and modules_install. "make zImage" could crash and burn, but if you are not installing the base kernel and are only working with modules, you probably don’t care that zImage fails. Assuming zImage did fail, it might be due to a compiler change (something which previously was not considered an error), or it might be that a configuration change means something which previously was not compiled for lack of need is now being built. Knowing which requires looking at the specific configuration and compiler combination.

Just to emphasize: if you use "O=${TEGRA_KERNEL_OUT}", then that is where the ".config" you care about lives. If you run mrproper and use "O=" anywhere, then mrproper and all your other commands must also use "O=".

I put my working notes here:

https://devtalk.nvidia.com/default/topic/930642/how-to-compile-tegra-x1-source-code/

They are based on several of the posts above.

Great guide… thanks!

It may look like a lot of additional work, but setting up a cross-compiling environment is well worth the effort.

I’ve set up mine on an Amazon EC2 instance with 32 cores. Seeing "make -j 48" compile everything in almost no time is fun! Just don’t forget to stop your instance when you are done; it can get expensive.
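A small sketch of picking the job count automatically instead of hard-coding it (the make invocation mirrors the build steps earlier in the thread and assumes CROSS_COMPILE, ARCH, and TEGRA_KERNEL_OUT are already exported; it is printed rather than executed so the sketch is safe to run anywhere):

```shell
# "nproc" reports the number of available cores; use it as the -j value.
JOBS=$(nproc)

# In a real run you would invoke make directly, e.g.:
#   make -j"$JOBS" O="$TEGRA_KERNEL_OUT" zImage
echo "would run: make -j$JOBS O=\$TEGRA_KERNEL_OUT zImage"
```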