Failed to start Load Kernel Modules when building a character driver

I want to add a character driver to the kernel, but I’m not able to. I wrote a simple helloworld driver named “helloworld_char_driver.c”. I downloaded the kernel sources for L4T R32.1 and modified the following:

  • /kernel/kernel-4.9/drivers/char/helloworld_char_driver.c (file created; a minimal sketch appears after this list)
  • /kernel/kernel-4.9/drivers/char/Makefile
obj-$(CONFIG_HELLOWORLD)    += helloworld_char_driver.o
  • /kernel/kernel-4.9/drivers/char/Kconfig
config HELLOWORLD
    tristate "My simple helloworld driver"
    default n
    help
      The simplest driver
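
For reference, a minimal sketch of what such a driver file might contain (a misc-device helloworld; names and behavior are illustrative, not my exact code):

/* helloworld_char_driver.c - minimal sketch, illustrative only */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

static const char msg[] = "hello world\n";

/* read() handler: hands the greeting back to user space */
static ssize_t hello_read(struct file *file, char __user *buf,
                          size_t len, loff_t *off)
{
    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

static const struct file_operations hello_fops = {
    .owner = THIS_MODULE,
    .read  = hello_read,
};

/* Registers /dev/helloworld with a dynamically assigned minor number */
static struct miscdevice hello_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "helloworld",
    .fops  = &hello_fops,
};

static int __init hello_init(void)
{
    pr_info("helloworld: loaded\n");
    return misc_register(&hello_dev);
}

static void __exit hello_exit(void)
{
    misc_deregister(&hello_dev);
    pr_info("helloworld: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("The simplest driver");

With CONFIG_HELLOWORLD=y this code is linked straight into the Image; with =m it becomes a loadable helloworld_char_driver.ko instead.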

After this, I built the image as follows:

export TEGRA_KERNEL_OUT=/home/julen/kernel_src/kernel_out
export CROSS_COMPILE=aarch64-linux-gnu-                    # Using GCC Linaro 7.3.1
cd /home/julen/kernel_src/kernel/kernel-4.9
make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra_defconfig
make ARCH=arm64 O=$TEGRA_KERNEL_OUT menuconfig             # I activate the driver here manually
make ARCH=arm64 O=$TEGRA_KERNEL_OUT Image -j12
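
The built Image ends up at $TEGRA_KERNEL_OUT/arch/arm64/boot/Image, and the copy step I do looks roughly like this (hostname and backup name are just examples):

scp $TEGRA_KERNEL_OUT/arch/arm64/boot/Image nvidia@tx2:/tmp/Image
# Then, on the Jetson:
sudo cp /boot/Image /boot/Image.backup
sudo cp /tmp/Image /boot/Image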

Once the Image is successfully built, I copy it into the /boot directory of the Tegra (replacing the existing Image, as shown above) and reboot the system, but it does not boot; it prints the following error while booting:

[FAILED] Failed to start Load Kernel Modules

What am I doing wrong? As far as I’ve read, if I just build the Image, I don’t need to flash the board; replacing the Image and rebooting should be enough.

If you build a module, then you should not replace the Image…only the module.

With regard to modules, if you boot any Linux system, then the output of “uname -r” is a combination of the base kernel version, plus a suffix. That suffix is set by humans at kernel compile time. The default for a Jetson is “-tegra”. The line in the “.config” file for this would be:

CONFIG_LOCALVERSION="-tegra"

If your base kernel version is “4.9.140”, and CONFIG_LOCALVERSION is “-tegra”, then “uname -r” becomes “4.9.140-tegra”.
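
You can see both pieces directly (the values shown are the stock L4T R32.1 ones):

uname -r              # on the Jetson: 4.9.140-tegra
make kernelversion    # in the kernel source tree: 4.9.140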

Modules are searched for here:

/lib/modules/$(uname -r)/
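
A quick sanity check on the Jetson is whether that directory actually exists for the running kernel:

ls /lib/modules/$(uname -r)/
# Expect kernel/, modules.dep, and so on. If the suffix changed, this
# directory won’t exist, and every module load fails during boot.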

What does this kernel’s “uname -r” show up as?

If you changed this and did not also install all of the modules, then every module will appear to be missing: the existing modules live in a directory named with the previous kernel’s CONFIG_LOCALVERSION suffix, which no longer matches what the new kernel searches for. Correct CONFIG_LOCALVERSION and then try again.
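
A sketch of that install step, following the same host-side build shown above (the staging directory name is just an example):

make ARCH=arm64 O=$TEGRA_KERNEL_OUT modules -j12
make ARCH=arm64 O=$TEGRA_KERNEL_OUT INSTALL_MOD_PATH=$HOME/modules_out modules_install
# Copy $HOME/modules_out/lib/modules/4.9.140-tegra/ into /lib/modules/ on the Jetson.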

FYI, when you add a feature as a module, you typically won’t rebuild the whole kernel. When you change a kernel’s base configuration with some non-module change, that is when you probably want to rebuild the kernel and all modules. Not all features can be modules; for example, adding or removing the ability to use swap space virtual memory would require rebuilding everything, since swap cannot be built as a module. Your hello world module would not require rebuilding the entire kernel, since it has no dependencies on integrated features. Conversely, if you build it only as a module, the Image itself has no knowledge of it, and replacing just the Image would never give you the hello world module.
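
For example, with CONFIG_HELLOWORLD=m you could rebuild just that one module and copy only the “.ko” over (hostname illustrative; the output directory must already be configured and built):

make ARCH=arm64 O=$TEGRA_KERNEL_OUT drivers/char/helloworld_char_driver.ko
scp $TEGRA_KERNEL_OUT/drivers/char/helloworld_char_driver.ko nvidia@tx2:/tmp/
# On the Jetson:
sudo insmod /tmp/helloworld_char_driver.ko
dmesg | tail    # should show the module’s load message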

I’ve tried to build the Image with the Linux_for_Tegra/ folder generated when flashing the TX2 with the SDK Manager (the one stored in $HOME/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310), and without adding the driver, so just the folder as it is, and yet the error shown during boot is the same. So the problem probably comes from how I generate the Image.

Regarding CONFIG_LOCALVERSION, I opened the configuration file kernel/kernel-4.9/arch/arm64/configs/tegra_defconfig, and the only LOCALVERSION-related line it contains is:

# CONFIG_LOCALVERSION_AUTO is not set

Moreover, uname -r shows the following:

4.18.0-21-generic

I’m building the Image on the host system (Ubuntu 18.04). Should it be built on the target instead?

UPDATE: Fixed!
I checked the cross-compiler, which I downloaded from here (https://devtalk.nvidia.com/default/topic/1048951/jetson-nano/nvidia-tegra-linux-driver-package-needs-details/post/5323490/#5323490), and digging a bit I realised it was built for a 32-bit system. As I’m working on a 64-bit system, and couldn’t find the 64-bit x86_64-aarch64 cross-compiler, I downloaded the Linaro 6.4.1 one (https://devtalk.nvidia.com/default/topic/1049380/jetson-agx-xavier/error-compiling-public_source-kernel/post/5326106/#5326106), and also set the CONFIG_LOCALVERSION attribute to “-tegra” from menuconfig. Now the kernel compiles, the TX2 boots up, and the configuration shows up in /proc/config.gz (CONFIG_HELLOWORLD=y), which is enough for now. Thanks!
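
In case it is useful to someone else, this is how I verified it on the TX2 (the grep pattern is just what I used):

zcat /proc/config.gz | grep -E 'CONFIG_HELLOWORLD|CONFIG_LOCALVERSION'
# ...should include CONFIG_LOCALVERSION="-tegra" and CONFIG_HELLOWORLD=y
uname -r
# 4.9.140-tegra

(Apparently passing LOCALVERSION=-tegra on the make command line also works, since kbuild appends that variable to the release string, but menuconfig was enough for me.)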

FYI, if you want the complete kernel source, then the best place to get it is via the “source_sync.sh” script in the “Linux_for_Tegra/” subdirectory. This will run on any system. An example to get the full kernel source, including out-of-tree content, under R32.1, is:

./source_sync.sh -k tegra-l4t-r32.1

I mention this because 4.18.0 sounds incorrect for L4T.

But if I’m using the L4T 32.1 sources (kernel 4.9) and the Linaro 6.4.1 cross-compiler, does the kernel version of the system I’m building the Image on matter? “uname -r” shows “4.9.140-tegra” on the TX2 and “4.18.0-21-generic” on the host Ubuntu 18.04 system.

You are correct…it sounded like 4.18.0-21-generic was on the Jetson. The version on the host PC won’t matter if the base source is 4.9.140. “uname -r” embeds the base version as the prefix, and the CONFIG_LOCALVERSION as the suffix.