Error compiling kernel for TX2

I have a TX2 board running Ubuntu 18.04 LTS, and I am trying to compile the kernel for it.
I am following this post here

Everything works fine until this command
make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j4

which gives this error

 make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j4
make[1]: Entering directory '/home/admin123/jetson_tx2/kernel/kernel-4.9/jetson_tx2_kernel'
arch/arm64/Makefile:49: LSE atomics not supported by binutils
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
  CHK     include/config/kernel.release
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
  GEN     ./Makefile
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
  CHK     include/generated/uapi/linux/version.h
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
  CHK     include/generated/utsrelease.h
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
  Using .. as source for kernel
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
/lib64/ld-linux-x86-64.so.2: No such file or directory
  HOSTCC  scripts/kallsyms
  CC      scripts/mod/empty.o
  HOSTCC  scripts/pnmtologo
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
  HOSTCC  scripts/conmakehash
/lib64/ld-linux-x86-64.so.2: No such file or directory
../scripts/Makefile.build:335: recipe for target 'scripts/mod/empty.o' failed
make[3]: *** [scripts/mod/empty.o] Error 255
../scripts/Makefile.build:649: recipe for target 'scripts/mod' failed
make[2]: *** [scripts/mod] Error 2
make[2]: *** Waiting for unfinished jobs....
/home/admin123/jetson_tx2/kernel/kernel-4.9/Makefile:579: recipe for target 'scripts' failed
make[1]: *** [scripts] Error 2
make[1]: Leaving directory '/home/admin123/jetson_tx2/kernel/kernel-4.9/jetson_tx2_kernel'
Makefile:171: recipe for target 'sub-make' failed
make: *** [sub-make] Error 2

/lib64/ld-linux-x86-64.so.2: No such file or directory
seems to be the error. I tried installing the required library, with no solution.

Also, this is being done on the TX2 board itself.

Can you verify whether this is cross compiled? The ARCH is correct for cross compiling, but you shouldn’t use it if compiling natively.

Also, don’t build a kernel with no arguments to make: The ARCH and O=... are options, and you can use options any time, but we need a specific target. You will want to specifically first build Image. Then, and only then, build modules. Also, I don’t see the configuration steps, which are mandatory before build (perhaps you used exactly the steps from that URL, but it is good to be thorough and mention exact setup steps before figuring out what is wrong). Regardless, once you have an empty output directory you’ve configured for, and when you are getting ready to actually build, then you would start with something like this:

make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j 4 Image
make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j 4 modules

Please note that configuration steps prior to build, in which configuration is set up, are where most failures come from.
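Putting the advice above together, the configure-then-build order could be sketched like this (a sketch only: TEGRA_KERNEL_OUT is the thread’s variable name, and mktemp stands in for a real path you would choose on the Jetson; the make invocations are shown as comments since they need the actual kernel-4.9 source tree):

```shell
# Start from a fresh, empty output directory so no stale configuration
# interferes; TEGRA_KERNEL_OUT follows the thread's convention.
TEGRA_KERNEL_OUT=$(mktemp -d)
[ -z "$(ls -A "$TEGRA_KERNEL_OUT")" ] && echo "output directory is empty"
# Then, from the kernel-4.9 source directory:
#   make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra_defconfig   # configure first
#   make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j4 Image          # kernel image
#   make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j4 modules        # then modules
```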

Hello, what do you mean by natively compiled or cross compiled?
I am doing the compilation on the TX2 board itself; that would mean natively, right?

For the steps, I am following this same guide.

I have successfully completed these steps:

  1. Install the required updates
  2. Download the required files
  3. Extract the files
  4. Apply the real-time kernel
  5. Configure the kernel

The error comes when compiling.
I have tried the suggestions you gave:

make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j 4 Image
make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j 4 modules

But I still got the same error I shared previously.

Yes, compiling on the same type of platform that the code goes to is native compile. If you were to build the software on a desktop PC using cross tools for the different architecture, then that would be cross compiling.

Whether you need a specific tool chain or not just depends on circumstances. Some of the code the TX2 uses, especially for boot, is old (boot seems to stick to 32-bit ARM, or else depend on feature sets of older compilers). Not all of that code requires older compilers, some of it works fine with the native tools on the TX2 itself. Documentation which NVIDIA provides is usually for cross compile tools from a PC, but other than being configured for cross compile, the versions listed by NVIDIA work also on native compile. As soon as you add the RT patches the required compiler tool chain might change (I don’t know for sure, but it is then possible the patches themselves require a different release of tool).

Please note that there are a lot of configuration steps needed before you get to the make Image step. One of the points of make of “Image” prior to make of “modules” is that the kernel configuration must be propagated throughout the source code in some cases. One of those cases is that if you configure and then directly compile modules, then it will fail because modules needs that configuration to be propagated. When you build Image there is no need to propagate this because the kernel’s Kconfig system knows this is required for Image. If you then build modules, directly after building Image, then you have no problem propagating configuration. However, if you configure, and then build modules without building Image, there is a missing configuration propagation. To fix that one would need the make target of modules_prepare. If you build Image, then modules_prepare, you have duplicated something which has already occurred (not harmful, but not needed).
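The two valid orderings described above can be summarized side by side (a sketch; the make commands are shown as comments since they require the configured kernel tree, and the O= path follows the thread’s convention):

```shell
# Ordering A: Image first (this propagates the configuration), then modules:
#   make O=$TEGRA_KERNEL_OUT -j4 Image
#   make O=$TEGRA_KERNEL_OUT -j4 modules
# Ordering B: modules without Image; modules_prepare does the propagation:
#   make O=$TEGRA_KERNEL_OUT modules_prepare
#   make O=$TEGRA_KERNEL_OUT -j4 modules
PREPARE_TARGETS="Image modules_prepare"
echo "one of these must run before modules: $PREPARE_TARGETS"
```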

I have this question: Why did you include “ARCH=arm64”? This is used for cross compile, and you said you are building natively (directly on the Jetson, not with cross tools on a PC). This will trigger some alternative methods for looking for tools, and this is more likely to cause an error if you name the ARCH while natively compiling.

Note also that problems with ld-linux... are linking issues. These do sometimes occur when using the wrong tool. It is possible your compiler is the wrong release to directly work, but you don’t really know this since you used ARCH (which can alter the situation). Try without ARCH. Use a completely new $TEGRA_KERNEL_OUT empty directory and configure from scratch.

If you have ever built directly in the kernel source, then make sure you have reverted any changes in the source tree with “make mrproper”. You would do that without any “O=$TEGRA_KERNEL_OUT”, as it is the source itself you want pristine. From then on, all commands use “O=$TEGRA_KERNEL_OUT”; this means your source remains pristine and only the temp location via $TEGRA_KERNEL_OUT will be altered.

I realized the mistake: I was doing the cross-compilation steps on the actual device.
Here’s what I have done natively now. I have downloaded the public sources from
https://developer.nvidia.com/embedded/l4t/r32_release_v7.2/sources/t186/public_sources.tbz2

I have unzipped it and applied the kernel patches, then built with make -j4.
Now I am here.
I have this as the result of the make command:

:~/kernel/kernel-4.9/arch/arm64/boot$ ls
dts Image Image.gz install.sh Makefile zImage

Now, how do I copy this, and what should I copy to /boot other than the Image?
Here’s how the boot directory looks (I made a copy of the Image there already).

This is the content of extlinux.conf.
If I replace the Image, what about the dtbs?
What about the INITRD?

I tried this before and broke the device (black screen after restart).

TIMEOUT 30
DEFAULT primary

MENU TITLE L4T boot options

LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      INITRD /boot/initrd
      APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2 

# When testing a custom kernel, it is recommended that you create a backup of
# the original kernel and add a new entry to this file so that the device can
# fallback to the original kernel. To do this:
#
# 1, Make a backup of the original kernel
#      sudo cp /boot/Image /boot/Image.backup
#
# 2, Copy your custom kernel into /boot/Image
#
# 3, Uncomment below menu setting lines for the original kernel
#
# 4, Reboot

# LABEL backup
#    MENU LABEL backup kernel
#    LINUX /boot/Image.backup
#    INITRD /boot/initrd
#    APPEND ${cbootargs}
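For reference, the commented backup template above, filled in so the fallback boots with the same arguments as the primary entry, could look like this (a sketch: copying the primary entry’s APPEND line is an assumption that the backup kernel should use identical boot options):

```
LABEL backup
      MENU LABEL backup kernel
      LINUX /boot/Image.backup
      INITRD /boot/initrd
      APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=1-2
```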

Install can vary greatly depending on circumstances. The official docs concentrate only on updating the flash software, and then flashing the Jetson.

If you are directly installing to the Jetson, and not flashing, then there are several questions which must first be answered:

  • Did you set CONFIG_LOCALVERSION, and if so, was it set to the string “-tegra”?
    • This changes many details, including the output of the command “uname -r”. CONFIG_LOCALVERSION is the suffix to that command, and modules are searched for at “/lib/modules/$(uname -r)/kernel/”. If the source version changes (the prefix) or CONFIG_LOCALVERSION changes, then the kernel won’t search for modules in the same place it used to search. You’d have to install all modules and the Image (the modules would be in a new location, and you’d want to keep the old Image and modules as a backup).
    • If “uname -r” remains the same after you’ve built modules, and if the configuration in the source code was a match for the existing/running kernel (other than modules), then it is just a simple copy of any new module to the correct subdirectory of “/lib/modules/$(uname -r)/kernel/” (the subdirectory matches the location in the kernel source where that module lives).
    • If “uname -r” has not changed, but you’ve created a new Image file which did not start with a matching integrated feature list (features/symbols are integrated and built in via “=y”, or else modular via “=m”), then this kernel is likely incapable of loading any of the old modules. Installing modules would require replacing all modules in the original module directory. This is undesirable because it means your old Image can’t be booted with those modules; they’ve become invalidated.
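The search-path rule in those bullets can be checked directly on the device (a sketch; the echoed path is where the module tools will look for this kernel’s modules):

```shell
# The module search path tracks "uname -r" exactly, so a change to
# CONFIG_LOCALVERSION (or to the source version) moves this directory.
KREL=$(uname -r)
MODDIR="/lib/modules/$KREL/kernel/"
echo "running kernel: $KREL"
echo "module search:  $MODDIR"
```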

Note that once upon a time Jetsons used the zImage, which is compressed, but no longer use this. I think the reason is that when zImage was used it was a 32-bit ARM CPU, and when 64-bit came out, the boot chain lost the ability to always decompress this correctly.

No, I did not set CONFIG_LOCALVERSION. I checked in make menuconfig,
and it has not been appended there either.

And the current result of the command uname -r is:
4.9.337-tegra
I don’t know if that changed after I built the modules or not.
I will start the steps again with the suffix set.

If you want to build a module to load in this existing kernel (by far the easiest way to go), then you must set CONFIG_LOCALVERSION to:

CONFIG_LOCALVERSION="-tegra"

Otherwise the module is not going to load. There may be other configuration issues, but this is the most common mistake when building modules which won’t load. A related issue: if this is left unchanged and you install a new kernel Image file, the new Image wouldn’t be able to load the old modules. Start by using “-tegra” and installing only the modules you have worked on building (you still need to either build Image before building modules, which propagates the config throughout the source, or build the target modules_prepare to do the same thing faster).
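One quick sanity check before building is to confirm the string actually landed in the configuration (demonstrated here against a sample .config fragment; on the Jetson you would grep the real .config in your output directory):

```shell
# Write a sample fragment standing in for the real .config, then verify
# CONFIG_LOCALVERSION carries the "-tegra" suffix the running kernel uses.
cfg=$(mktemp)
printf 'CONFIG_LOCALVERSION="-tegra"\n' > "$cfg"
grep '^CONFIG_LOCALVERSION=' "$cfg"
```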

I successfully compiled the RT kernel on the device (TX2); here are the steps I followed.

Step 1: Create a Working Directory

Create a new folder and navigate into it:

mkdir jetson_tx2 && cd jetson_tx2

Step 2: Download the Kernel Source

Use wget to download the kernel source for your Jetson TX2:

wget https://developer.nvidia.com/embedded/l4t/r32_release_v7.2/sources/t186/public_sources.tbz2

Step 3: Extract the Source

Extract the downloaded source:

sudo tar -xjf public_sources.tbz2

Step 4: Extract the Kernel Source

Navigate to the kernel source and extract it:

tar -xjf Linux_for_Tegra/source/public/kernel_src.tbz2

Step 5: Apply Real-Time Patches

Go into the kernel source directory and apply the RT patches:

cd kernel/kernel-4.9/
./scripts/rt-patch.sh apply-patches

Step 6: Copy Current Kernel Configuration

Copy the configuration of the currently running kernel:

zcat /proc/config.gz > .config

Alternatively, start with the default Jetson kernel configuration:

make tegra_defconfig

Step 7: Set CONFIG_LOCALVERSION

Run the menu configuration tool:

make menuconfig
  • Navigate to General Setup → Local version - append to kernel release.
  • Set the value to -tegra.

Step 8: Configure Kernel for Real-Time

Within the menuconfig tool, adjust the following settings:

  1. General Setup → Timer subsystem → Timer tick handling:
     • Enable Full dynticks system (tickless).
  2. Kernel Features → Preemption Model:
     • Select Fully Preemptible Kernel (RT).
  3. Kernel Features → Timer frequency:
     • Set 1000 HZ.

Step 9: Build and Install the Kernel

Compile the kernel and install the modules:

make -j$(nproc)
make modules
sudo make modules_install

Copy the new kernel image to /boot (important: first make a backup of the original image in /boot):

sudo cp arch/arm64/boot/Image /boot/Image
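Since the original image is the only fallback, the backup step above is worth spelling out (a sketch following the extlinux.conf comments earlier in the thread; the real commands need root on the Jetson, so they are shown for reference):

```shell
# Reference sequence to run on the Jetson:
#   sudo cp /boot/Image /boot/Image.backup      # keep a bootable fallback
#   sudo cp arch/arm64/boot/Image /boot/Image   # install the new kernel
# Uncommenting a matching "backup" LABEL in /boot/extlinux/extlinux.conf
# then lets you select the old kernel if the new one fails to boot.
NEW_IMAGE="arch/arm64/boot/Image"
echo "back up /boot/Image before installing $NEW_IMAGE"
```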

Step 10: Reboot

Reboot your Jetson TX2 to load the new kernel:

sudo reboot

Verification

To verify that your Jetson TX2 is running a real-time kernel:

  1. Check the kernel version:

uname -a

Look for rt in the kernel version (e.g., 4.9.253-rt168-tegra).

  2. Verify the real-time configuration:

zcat /proc/config.gz | grep CONFIG_PREEMPT

Ensure CONFIG_PREEMPT_RT_FULL=y is enabled.

Thank you @linuxdev.
