I don’t know yet if it works, as I’m testing right now, but you may want to try the most recent Linaro toolchain:
https://releases.linaro.org/components/toolchain/binaries/latest-5.2/
You’ll need both aarch64-linux-gnu and arm-linux-gnueabihf: aarch64 is the 64-bit compiler, while gnueabihf is the 32-bit variant needed for some legacy components. Kernels will compile with just the gcc-linaro packages; the runtime and sysroot shouldn’t be required for a kernel build (regular user-space applications would need them). On my system I configured with “./configure --prefix=/usr/local/gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu” and “./configure --prefix=gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf”.
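If both toolchains ended up under /usr/local as in the first configure line (adjust the paths if yours differ, and note the binary name prefix can vary depending on which Linaro package you grabbed), adding their bin directories to PATH is optional but convenient for a quick sanity check:
$ export PATH=/usr/local/gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu/bin:$PATH
$ export PATH=/usr/local/gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf/bin:$PATH
$ aarch64-linux-gnu-gcc --version
$ arm-linux-gnueabihf-gcc --version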
The R23.1 documentation has a section on kernel cross-compilation; I’m restating it here for convenience:
To build the Tegra Kernel
1. Export the following environment variables:
$ export CROSS_COMPILE=<crossbin>
$ export CROSS32CC=<cross32bin>gcc
$ export TEGRA_KERNEL_OUT=<outdir>
$ export ARCH=arm64
Where:
•<crossbin> is the path plus name prefix applied to the tool chain binaries (gcc, etc.) for cross compilation targeting arm64. For a Linaro tool chain, it will look something like:
<linaro_install>/aarch64-unknown-linux-gnu/bin/aarch64-unknown-linux-gnu-
Note: This example requires GCC 4.9 or above.
•<cross32bin> is the path plus name prefix applied to the tool chain binaries (gcc, etc.) for cross compilation targeting arm32. For a CodeSourcery tool chain, it will look something like:
<csinstall>/arm-2009q1-203-arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-
And CROSS32CC would be:
<csinstall>/arm-2009q1-203-arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc
Note: This example requires GCC 4.7 or above.
•<outdir> is the desired destination for the compiled kernel.
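For example, with the Linaro toolchains installed as above and a scratch directory of my own choosing (all of these paths are just my setup; substitute your own):
$ export CROSS_COMPILE=/usr/local/gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
$ export CROSS32CC=/usr/local/gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
$ export TEGRA_KERNEL_OUT=$HOME/kernel_out
$ export ARCH=arm64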
2. Execute the following commands to create the .config:
$ cd <myworkspace>/<kernel_source>
$ mkdir $TEGRA_KERNEL_OUT
Where <kernel_source> is the directory containing the kernel sources.
•For Tegra X1 (Jetson TX1), use:
$ make O=$TEGRA_KERNEL_OUT tegra21_defconfig
Where <myworkspace> is the parent of the Git root.
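Putting step 2 together with the exports above (the workspace path here is just an example):
$ cd $HOME/myworkspace/kernel_source
$ mkdir -p $TEGRA_KERNEL_OUT
$ make O=$TEGRA_KERNEL_OUT tegra21_defconfig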
3. Execute the following commands to build the kernel:
$ make O=$TEGRA_KERNEL_OUT zImage
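A parallel make flag is optional but speeds things up, and building zImage should also leave the uncompressed Image next to it under the scratch directory (a sketch, not from the docs):
$ make -j4 O=$TEGRA_KERNEL_OUT zImage
$ ls $TEGRA_KERNEL_OUT/arch/arm64/boot/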
4. Execute the following command to create the kernel device tree components:
$ make O=$TEGRA_KERNEL_OUT dtbs
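The resulting device tree blobs should land under the scratch directory, e.g.:
$ ls $TEGRA_KERNEL_OUT/arch/arm64/boot/dts/*.dtb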
5. Execute the following commands to build the kernel modules (and optionally install them):
$ make O=$TEGRA_KERNEL_OUT modules
$ make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=<your_destination>
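For example, staging the modules into a throwaway directory (the path here is just an illustration) keeps them off your host’s own /lib/modules:
$ mkdir -p $HOME/modules_out
$ make O=$TEGRA_KERNEL_OUT modules
$ make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$HOME/modules_out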
6. Copy both the uncompressed (Image) and compressed (zImage) kernel images over the ones present in the ‘kernel’ directory of the release.
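Assuming the release is the usual Linux_for_Tegra driver package layout (adjust <release> to wherever you unpacked it), the copy would be something like:
$ cp $TEGRA_KERNEL_OUT/arch/arm64/boot/Image <release>/Linux_for_Tegra/kernel/Image
$ cp $TEGRA_KERNEL_OUT/arch/arm64/boot/zImage <release>/Linux_for_Tegra/kernel/zImage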
7. Archive the kernel modules created in Step 5 using the tar command, giving the archive the same filename as the kernel modules TAR file in that same kernel directory of the release. When both of those TAR files are present, you can follow the instructions provided in this document to flash and load your newly built kernel.
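As a sketch, and assuming the modules TAR file in your release’s kernel directory is named kernel_supplements.tbz2 (check your release, the name may differ), you could recreate it from the INSTALL_MOD_PATH staging directory used above:
$ cd $HOME/modules_out
$ tar -cjf kernel_supplements.tbz2 lib/modules
$ cp kernel_supplements.tbz2 <release>/Linux_for_Tegra/kernel/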
This post shows a workaround for a bug that would otherwise halt the compile:
https://devtalk.nvidia.com/default/topic/894945/jetson-tx1/jetson-tx1/post/4744666/#4744666
The mix of 32-bit and 64-bit is why you need a general “CROSS_COMPILE” (a directory path plus name prefix) as well as “CROSS32CC” (the full path to the 32-bit gcc executable). TEGRA_KERNEL_OUT is just a temporary scratch directory that keeps compiles and configs separate from the source; set it once and use “make O=$TEGRA_KERNEL_OUT …” anywhere you have set up scratch space.
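A quick sanity check that both prefixes resolve before kicking off a build (using the exports from step 1):
$ ${CROSS_COMPILE}gcc --version
$ ${CROSS32CC} --version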
Don’t forget, under “General setup” in the kernel config, to append something to the Local version, e.g., “-g3a5c467”.
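One way to set it (just a sketch; you can also edit CONFIG_LOCALVERSION in the generated .config directly) is through menuconfig in the scratch directory:
$ make O=$TEGRA_KERNEL_OUT menuconfig
Navigate to “General setup” → “Local version - append to kernel release” and enter the suffix. Afterwards, “make O=$TEGRA_KERNEL_OUT kernelrelease” should print the version string with the suffix appended.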