There is a lot that could be said about “good practices”. If the feature you added is not in the form of a module (i.e., it is built directly into the kernel Image), then the change might invalidate all existing modules against that new configuration.
Consider the output of the command “uname -r”. The prefix is the source code version, and the suffix is the string from CONFIG_LOCALVERSION. In a contrived example, if your kernel source version is 5.15.0, and your CONFIG_LOCALVERSION is set like this:
CONFIG_LOCALVERSION="-tegra"
…then “uname -r” would report 5.15.0-tegra.
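As a quick sanity check (just a sketch; the values shown are from the contrived example above), you can compare the two pieces directly:
uname -r
# 5.15.0-tegra
grep LOCALVERSION .config   # run from the top of the configured kernel source
# CONFIG_LOCALVERSION="-tegra"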
A kernel looks for its modules at:
/lib/modules/$(uname -r)/kernel
If your “uname -r” did not change, then the new kernel looks for its modules at the same location the previous kernel used. Considering the Image file itself changed, there is a strong chance those modules will have issues loading. So in that case you are advised to use a different CONFIG_LOCALVERSION (e.g., “CONFIG_LOCALVERSION="-pcie-ptm"”). The modules would then go into the subdirectories of (adjust for your actual kernel version) “/lib/modules/5.15.0-pcie-ptm”. You could leave the old kernel in place as a backup which can still load the old modules. The new kernel could be named something like Image-pcie-ptm as a reminder.
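As a sketch of that backup arrangement (the /boot paths and the extlinux entry are typical of Jetson releases, but verify against your own setup before relying on them):
sudo cp /boot/Image /boot/Image.original               # keep the known-good kernel
sudo cp arch/arm64/boot/Image /boot/Image-pcie-ptm     # new kernel, named as a reminder
# Then add a second entry in /boot/extlinux/extlinux.conf whose LINUX line
# points at /boot/Image-pcie-ptm, leaving the default entry on the old Image.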
When you build modules for a kernel they end up in subdirectories named after the source location of that specific module or driver. A contrived example: perhaps you have built (relative to the top of the kernel source) “drivers/net/gizmo.ko”. In that case, assuming the example “uname -r” above, the module would end up located here:
/lib/modules/5.15.0-pcie-ptm/kernel/drivers/net/gizmo.ko
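Continuing the contrived example (gizmo is of course a hypothetical module name), you can check that the kernel can actually resolve the module once the file is in place:
sudo depmod -a        # rebuild module dependency data for the running kernel
modinfo gizmo         # should print the path and metadata if it was indexed
sudo modprobe gizmo   # loads the module plus any dependencies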
I do not recommend having the kernel build place everything directly in its final location (and in fact you cannot if cross compiling). If you use the “O=/some/where” build option, then intermediate output goes to that location (and the configuration is read from that alternate location). You can combine this with “INSTALL_MOD_PATH=/some/where/else/for/modules”, and the entire tree of only what gets installed goes into that path. You can then recursively copy this into the (example) /lib/modules/5.15.0-pcie-ptm/kernel, and the new kernel would find those modules and they would be compatible.
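Putting those options together for a native compile, a minimal sketch might look like this (the build directories are arbitrary names; adjust the -j value and the config edits to your case):
mkdir -p "$HOME/build/kernel" "$HOME/build/modules"
make O="$HOME/build/kernel" tegra_defconfig
# ...edit $HOME/build/kernel/.config: set CONFIG_LOCALVERSION, enable your feature...
make O="$HOME/build/kernel" -j4 Image modules
make O="$HOME/build/kernel" INSTALL_MOD_PATH="$HOME/build/modules" modules_install
# The result under $HOME/build/modules/lib/modules/5.15.0-pcie-ptm/ is the tree
# to recursively copy into /lib/modules/5.15.0-pcie-ptm/ on the Jetson
# (then run "sudo depmod 5.15.0-pcie-ptm" there).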
The above is for a native compile (if you do this, be careful to have enough disk space), and a few more options are added for cross compile, but here is a useful cheat sheet (you can ignore the dtbs and firmware portions in this case):
kern_setup.sh (2.3 KB)
The target in that script for installing modules gives you a replica of the tree of files as it would go into “/lib/modules/$(uname -r)/kernel”.
Just set that script to executable, and get the notes via “./kern_setup.sh”. That example sends output to a lot of separate build locations; it isn’t really necessary to split things up into all of those locations, but its purpose is to illustrate.
Note that one has to start with a configuration which matches the existing kernel, and only then edit to create the new configuration. If your Jetson has a default configuration to start with, then that can be via the “tegra_defconfig” make target. If not, then it can come from “/proc/config.gz” (after decompressing it and renaming it to “.config”). In all cases you’d have to set CONFIG_LOCALVERSION yourself, because it is not set via either config.gz or tegra_defconfig.
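As a sketch of the /proc/config.gz route (scripts/config ships with the kernel source itself; run this from the top of that source tree):
zcat /proc/config.gz > .config                       # the running kernel's configuration
scripts/config --set-str LOCALVERSION "-pcie-ptm"    # set CONFIG_LOCALVERSION
make olddefconfig                                    # fill in defaults for any new symbols
After that you can edit further (menuconfig, nconfig, and so on) to add your feature.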
Incidentally, if there are modules which are required for boot, then those have to go in the initrd when using this (Orins do use an initrd, but many modules are not needed in it). If a module is required for reading the filesystem, and it is not integrated within the Image, then you run into a bit of a “chicken and the egg” dilemma; the initrd is how you get around that. For example, if your filesystem type is XFS, but you only have ext4 support in the Image, then you’d need to load the XFS module…and if the XFS module is itself stored on XFS, you have a problem. The initrd is a very simple, minimal root filesystem which more or less contains only modules and firmware; as its last command, it performs a pivot of the root filesystem to the actual filesystem and starts executing init there.
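If you want to verify whether a given module is already inside the initrd, a quick sketch (the /boot/initrd name is typical on Jetson; many initrds are gzip-compressed cpio archives, but yours may differ):
mkdir /tmp/initrd-view && cd /tmp/initrd-view
zcat /boot/initrd | cpio -id     # unpack; drop the zcat if it is uncompressed cpio
find . -name '*.ko'              # list the modules packed inside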