Kernel customization build, where to put generated files?

I’m running through the Kernel customization found here . . .
https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/Kernel/KernelCustomization.html

Under step 5 in the Building the Kernel section, after running the nvbuild.sh script, it says to copy the nvgpu.ko file to this location . . .
Linux_for_Tegra/rootfs/usr/lib/modules/$(uname -r)/kernel/drivers/gpu/nvgpu

I cannot find this path.

First, there is no rootfs directory. I searched and found instructions on manually creating a rootfs directory, downloading a Sample root filesystem, and extracting it into the rootfs directory. That gave me most of the directory structure described above, except there was no modules directory after traversing Linux_for_Tegra/rootfs/usr/lib.

So, I’m at a loss as to where to add the nvgpu.ko file.

Just to clarify how things work here.

This document is doing a cross compile, which means you are compiling the kernel on an x86 host PC, not on the Jetson.

The other point to clarify is that a Jetson is generally flashed from an x86 host PC.
You should do that on the first day you get a Jetson device.
The Linux_for_Tegra directory is the BSP package that exists on the host PC, not on the Jetson itself.

The next step depends on whether you have ever flashed the board before.

Thanks for your reply. I’m obviously new to embedded systems. I’ve been tasked to understand how to cross compile for a target system (aka Jetson Orin) while on a different Linux machine (Ubuntu in my case). I do not have the Jetson board hardware, so most likely I’ll be testing in a docker container.

It’s been difficult finding tutorials. I’ve just been muddling my way through. Would you know of any tutorials that would describe the process . . .

  1. build kernel module
  2. put that .ko file in correct path
  3. emulate the Jetson Orin with the new .ko file in a docker container

If I understand correctly, this is basically what I’m trying to figure out and understand.

Unfortunately, there is no such thing supported.

I have downloaded Tegra_Linux_Sample-Root-Filesystem_R35.1.0_aarch64.tbz2 and run in a docker container.

And I have followed the instructions successfully to create the .ko file as described above.

I’m assuming I just need to make the connection or link that brings these 2 together.

Some concepts that will help…

The command “uname -r” queries the running kernel for its release string. The left part of the reply comes from the source code release version, while the right side (if any) comes from the CONFIG_LOCALVERSION configuration string set at the time of compile. As an example, if “uname -r” replied with “5.15.0-tegra”, then the kernel source is release 5.15.0, and during compile of that kernel “CONFIG_LOCALVERSION” had been set to “-tegra”.
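As a quick illustration of how that release string splits apart, here is a sketch using plain shell parameter expansion (“5.15.0-tegra” is just the example value from above, not a real query):

```shell
# Split an example "uname -r" style string into its two parts.
kver="5.15.0-tegra"
release="${kver%%-*}"        # source release version (left part)  -> 5.15.0
localver="-${kver#*-}"       # CONFIG_LOCALVERSION suffix (right) -> -tegra
echo "release=$release localversion=$localver"
```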

The modules of a running kernel will always be searched for at this location:
/lib/modules/$(uname -r)/kernel/

Thus, in that example, modules would be somewhere under:
/lib/modules/5.15.0-tegra/kernel/

If source for the kernel is called TOP, and if you’ve set an environment variable to this example location on the host PC for the cross compile:

  • ~/kernel_source/
  • export TOP=~/kernel_source/

And if your subdirectory within that tree for some example driver is at “drivers/i2c/something.ko”, then you would place that file (if doing so manually) at:
/lib/modules/5.15.0-tegra/kernel/drivers/i2c/something.ko

You’d then want to run “depmod -a”, and possibly reboot.
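Putting the example together, the manual copy would look something like this (a sketch; the release string and driver subdirectory are the examples from above, and “something.ko” is a hypothetical module name):

```shell
# Compute the target module directory from the example "uname -r" value.
KREL="5.15.0-tegra"                           # what "uname -r" reports on the Jetson
DEST="/lib/modules/$KREL/kernel/drivers/i2c"  # example driver subdirectory
echo "$DEST"
# Then, on the Jetson as root:
#   cp something.ko "$DEST"/
#   depmod -a
```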

If on the host PC you were to run the compile target “modules_install”, this would be incorrect unless you’ve set up an alternate module output location. Without that setup, all modules would be placed in the actual “/lib/modules” of the host PC, not of the Jetson. You could modify your compile output for this. As an example, let’s pretend your desired module output location is:
~/modules_out

You could:

export TEGRA_MODULES_OUT=~/modules_out
make ...options... INSTALL_MOD_PATH=$TEGRA_MODULES_OUT modules_install

With that you’d get the entire tree of modules, including paths, put in “~/modules_out/”. If desired, you could then tar that up from the right subdirectory, copy it to the Jetson, and untar it in the right location. If you were natively compiling on the Jetson, then you’d run modules_install without any export and it would just go to the right place.
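The tar-and-copy step could look like this (a sketch; it assumes the modules_install run above populated ~/modules_out, which is simulated here with an empty tree so the commands are reproducible):

```shell
# Package the installed module tree so it can be untarred from "/" on the Jetson.
export TEGRA_MODULES_OUT=~/modules_out
mkdir -p "$TEGRA_MODULES_OUT/lib/modules"   # modules_install would have created this
cd "$TEGRA_MODULES_OUT"
tar cjf /tmp/kernel_modules.tbz2 lib/modules
tar tjf /tmp/kernel_modules.tbz2            # list what the archive contains
# On the Jetson, as root:  cd / && tar xjf kernel_modules.tbz2 && depmod -a
```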

IMPORTANT: The install steps you see in docs are usually listed for putting the modifications into the flash software on the host PC; the intent is for the modification to be applied during a flash. You don’t usually need to flash. The choice depends in part on whether you are setting up a production line versus updating a unit you are working on directly.

IMPORTANT: If you do not change any kernel configuration other than adding modules, you can reuse the kernel itself (the Image file). As soon as you change CONFIG_LOCALVERSION or an integrated feature (selected with y instead of m in the config editor of an otherwise matching config), you likely need to install a new Image and all of the modules, not just the module you are interested in. In that case you would also need to make sure you have a different CONFIG_LOCALVERSION so as to not overwrite or select modules in the original “uname -r” location. An example would be:
CONFIG_LOCALVERSION="-testing"
(then for that kernel, when installed, “uname -r” would be “5.15.0-testing” if we use our example kernel)

Doing it this way means the old kernel, if left in place, can still boot. You’d add a second boot entry in “/boot/extlinux/extlinux.conf” for the new kernel, perhaps giving its Image file a different name, e.g., “Image-testing” (it is good to name it after the CONFIG_LOCALVERSION).
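A hypothetical second entry in “/boot/extlinux/extlinux.conf” might look like the sketch below; the LINUX/INITRD paths and the APPEND line should be copied from your existing working entry, with only the label and the Image name changed:

```
LABEL testing
      MENU LABEL testing kernel
      LINUX /boot/Image-testing
      INITRD /boot/initrd
      APPEND ${cbootargs}
```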

There is usually a default kernel config target, up through L4T R35.x, which is “tegra_defconfig”. If you’ve ever modified the existing Image, then that default is no longer accurate. The file “/proc/config.gz” is a reflection in RAM of the running kernel’s configuration, except that it does not show the running CONFIG_LOCALVERSION. As long as you copy this somewhere else, gunzip it, rename it to .config, and properly set CONFIG_LOCALVERSION, this latter method will always be an exact match to the current kernel, even if the kernel was modified.
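That /proc/config.gz route, sketched as commands (here /tmp/config.gz stands in for the real /proc/config.gz so the steps are reproducible anywhere; on an actual Jetson read /proc/config.gz directly, and “-testing” is the example CONFIG_LOCALVERSION from above):

```shell
# Stand-in for /proc/config.gz; on a real Jetson use /proc/config.gz itself.
printf 'CONFIG_LOCALVERSION=""\nCONFIG_EXAMPLE=y\n' | gzip > /tmp/config.gz

gunzip -c /tmp/config.gz > /tmp/.config      # copy elsewhere, gunzip, rename to .config
sed -i 's/^CONFIG_LOCALVERSION=.*/CONFIG_LOCALVERSION="-testing"/' /tmp/.config
grep CONFIG_LOCALVERSION /tmp/.config
```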

Remember: Changing an integrated feature requires a lot of work, simply adding a module does not. Official docs though will be aimed at putting that content into the flash software rather than just copy directly to the Jetson.
