Some definitions or descriptions of how the kernel is arranged will probably help, so bear with me while I write [far] too much :P The confusion is common, so I’ll just write it down for other people as well.
The kernel itself can be one monolithic set of code without any modules, or parts of it can be loaded and unloaded while the kernel runs…those parts are modules. Not all parts of the kernel can be unloaded/reloaded as a module, but many can. If I say integrated, then I mean “a feature enabled when configuring the kernel such that the feature is not a separate module”.
There are many kernel features. Some provide services or features which are not drivers, but which are used by some part of the kernel. As an example, if virtual memory is enabled to allow a swap file, then virtual memory is not a driver, but it does use the hard disk’s driver to read/write the swap file. If I say driver, then the driver might or might not be a module, but you can consider a driver to be something which interacts with hardware and is not just some sort of support algorithm. Some modules are drivers, some are not. Some kernel features interact with hardware, some do not.
The bootloader has the job of setting up an environment for the Linux kernel, and then ending its own life by overwriting itself with the kernel. At the time of load the kernel may look at the inherited environment for clues on how to set itself up. Arguments will be passed to the kernel on the command line just like any other program (see “cat /proc/cmdline” and you’ll know what was passed as an argument).
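As a quick illustration (the exact content below is contrived and will differ on your board), this is how you’d look at what the kernel was handed:
cat /proc/cmdline
# Example output (contrived):
# console=ttyS0,115200n8 root=/dev/mmcblk0p1 rw rootwait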
Kernel arguments can either be from the set of standard arguments all kernels work with, or they might be something only some custom part of the kernel understands. When the kernel loads, every driver can see every command line argument, but arguments a given driver doesn’t know about are simply ignored. If an argument is custom, but a driver sees the argument and is programmed to use it, then suddenly the argument has meaning. Some of the environment passed on is similar in that different parts of the kernel might find meaning in it, while other parts ignore it.
Drivers can have arguments passed to them at the time they load (part of the “insmod” command). Integrated features only deal with arguments at the time the kernel loads and don’t really accept arguments the same way as loading(“insmod”)/unloading(“rmmod”)/reloading(“rmmod”+“insmod”) of a module does.
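As a sketch of how module arguments work (the module name “example” and its “debug” parameter are hypothetical, not an actual Tegra module):
# List the parameters a module is programmed to accept:
modinfo -p example.ko
# Load it with an argument, then unload it:
sudo insmod example.ko debug=1
sudo rmmod example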
Command line arguments tend to be for features of a driver which are more or less an abstract concept, and although that concept may not exist outside of the driver, the argument is something generic the driver will understand even if the hardware implementation differs. For example, a given chipset will use the same driver, but on different motherboards it may be located at a different base address. It wouldn’t matter what the base address is; such an argument is valid regardless.
Then comes the part most people outside of the embedded world never hear about: The device tree. A long time ago a device would require a different driver, even with the same chipset, if its base address or some other non-chipset-dependent detail changed. E.g., one driver for the chipset on one motherboard brand, and a different driver for the same chipset on another motherboard brand. You might even make an adapter so a generic kernel driver for the chipset could load after the adapter adjusted for the different base address (or any such similar detail). Linus Torvalds became a bit upset about how fast the kernel was growing just for these simple details. So we got the device tree.
The device tree is yet another way of passing arguments to drivers. However, the device tree is special in that it has a unified mechanism to read the tree and a unified layout. Like command line arguments, if a driver does not know about the argument, then that part of the tree will simply be ignored. However, drivers of non-Plug-n-Play devices became responsible for dealing with custom base addresses and other details which differ even when the actual chipset remains constant. In addition, recall that I mentioned that the bootloader also sets up an environment prior to loading the kernel…the bootloader itself can use device tree content for setting up hardware and drivers prior to the kernel ever loading. This then gets passed on to the kernel as it loads, and drivers see whatever content applies to them, e.g., the driver will now know there is a serial port at an address via a “serial@12345678” type entry (the “12345678” is a contrived base address).
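If you’re curious what tree content the kernel actually received on a running Jetson, something like this will show it (this assumes the “device-tree-compiler” package is installed; the nodes you see depend entirely on your board and release):
# The kernel exposes the tree it was handed under /proc:
ls /proc/device-tree/
# Decompile the live tree back into human readable source form:
dtc -I fs -O dts -o extracted.dts /proc/device-tree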
Device trees can be built entirely independently of the kernel source. Many people work on device trees without ever touching the kernel source. However, since the drivers which may or may not be configured into a kernel also determine whether some piece of a device tree will matter, you will find that you can compile a device tree along with the kernel. Depending on which drivers you select, you will find different “.dtsi” device tree files are combined in order to create a single device tree. You could compile for various drivers you don’t need, and that content would simply be ignored, but why bother?
In older releases the bootloader simply read a device tree file from the ext4 filesystem. In that case only U-Boot and Linux needed this. However, in more recent releases (which are gearing up for Secure Boot and redundancy), you will find the device tree migrated to a separate binary partition instead of the ext4 filesystem. This is because those stages of boot prior to U-Boot do not have an ext4 driver. Instead those earlier stages read directly from a partition, and even U-Boot inherits the tree which those earlier stages pass on to it…and then the kernel inherits from the bootloader.
As such, where you used to simply be able to copy a file into “/boot” for device tree changes, you must now use flash software to put the tree into a partition (and this includes cryptographic signing…if signing is not correct, then the content will be rejected).
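Just as an illustration of what that flash step can look like, and only as a sketch: the board target, the partition label (“DTB” versus “kernel-dtb”), and whether “-k” is even the right mechanism all vary by L4T release, so follow your release’s documentation rather than this:
# On the host, from the Linux_for_Tegra directory, with the Jetson in recovery mode:
sudo ./flash.sh -r -k DTB jetson-tx1 mmcblk0p1
# "-r" reuses the existing system image; "-k DTB" flashes only the named partition.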
For actual procedures, you must expect that anything you do with a kernel requires the kernel to be first configured. Often there is a “make” target of some sort of default config, the “something_defconfig”. Many Tegra platforms inherit hardware from earlier releases, and sometimes you will see just “tegra” for a series of different Jetsons, or even drivers named for a Tegra release which is different from what you have. When you see references in the docs to “make tegra_defconfig”, what you are doing is creating a “.config” file at the base of your kernel output location, and that config is valid as a base starting point for that platform.
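A minimal sketch of that configure step when cross compiling from a host PC (the output directory and cross compiler prefix are just example choices, and the defconfig name must match what your L4T release documents):
# Example output location; any directory works:
export TEGRA_KERNEL_OUT=~/kernel_out
mkdir -p $TEGRA_KERNEL_OUT
# Creates the starting ".config" at the base of the output location:
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=aarch64-linux-gnu- tegra_defconfig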
On a running system you will also find a pseudo file (provided by a driver and not actually on the hard drive) at “/proc/config.gz”. I tend to start with this because, other than one detail, it is an absolute guarantee that the features selected in the kernel I am building are an exact match to what I already have in place.
After you run “make tegra_defconfig”, or after you’ve placed a gunzip-decompressed copy of “/proc/config.gz” in the build output location under the name “.config”, you can make modifications. Or just build things. However, that “one detail” which does not exactly match the running system still needs to be addressed.
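A minimal sketch of starting from the running system’s config instead of the defconfig (using the same example output directory as above; run this on the Jetson, or copy the file over to the build host first):
# Decompress the running kernel's configuration into the build output location as ".config":
zcat /proc/config.gz > $TEGRA_KERNEL_OUT/.config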
When a kernel loads a module it needs to know where to find the module. The output of “uname -r” is a combination of the base kernel version, e.g., “4.4.38”, and a suffix. The suffix usually comes from the “.config” file’s “CONFIG_LOCALVERSION”. You’ll see this as “-tegra” by default. So if you were to manually edit the “.config”, then this is what you’d want for that one line:
CONFIG_LOCALVERSION="-tegra"
Modules load from “/lib/modules/$(uname -r)/”, and thus if you install a new kernel without preserving the old “uname -r”, then 100% of the modules need to be rebuilt and put back into place under the new “uname -r”. Sometimes you actually want this, e.g., if you changed a feature in the kernel Image which invalidates the existing modules. Before and after you install a kernel Image, check “uname -r” to find the suffix to set in “CONFIG_LOCALVERSION”, and to verify that what you installed is what you expected.
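One way to handle that suffix without hand editing is the kernel’s own “scripts/config” helper (run from the kernel source directory; the output path is the same example as before):
# On the Jetson, check the suffix currently in use, e.g., "4.4.38-tegra":
uname -r
# In the kernel source on the build host, set the matching suffix:
scripts/config --file $TEGRA_KERNEL_OUT/.config --set-str LOCALVERSION "-tegra"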
Regardless of what you build, I suggest that after you configure the kernel you always build the “Image” target once to see if it works. This also sets up some dependencies which are needed before building modules or the device tree (there are other ways to set up those dependencies, but if you can’t build Image, then none of your other build targets will be valid). If, for example, you build modules prior to building Image (or prior to an alternative such as “make modules_prepare”), the build will fail due to an incomplete configuration.
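A sketch of that build order, using the same example variables as above (append something like “-j$(nproc)” if you want a faster build):
# Build the integrated kernel first; this also validates the configuration:
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=aarch64-linux-gnu- Image
# Only then build the modules:
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=aarch64-linux-gnu- modules
# Stage the modules in a temporary directory for copying to the Jetson:
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=~/modules_out modules_install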
The “make dtbs” target builds a device tree binary. Device tree source files are “.dts” files, device tree binary files are “.dtb” files, and a “.dtsi” file is a device tree include file used only while compiling the source into a binary. You’ll need to consult the documentation of the particular L4T/JetPack release for how to install a device tree. The particular device tree binary you put in the flash software area will have a name based on both the module and the carrier board (and perhaps the carrier board revision). If you ever flash a Jetson, always save a log so you can see which specific device tree files are used for your particular board.
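The build itself looks much the same as the other targets (same example variables as above); the resulting “.dtb” files land under the output directory’s “arch/arm64/boot/dts/” area:
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=aarch64-linux-gnu- dtbs
# List what was produced so you can match the name against your flash log:
find $TEGRA_KERNEL_OUT/arch/arm64/boot/dts -name '*.dtb'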
FYI, the device tree does not compile “into” the kernel. This is a separate and independent file and ends up in a partition, but the kernel does see its content. Modules will end up somewhere in “/lib/modules/$(uname -r)/”. The “Image” file (the uncompressed integrated kernel) will be in “/boot/”.
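A quick sanity check after installing (these are just the standard locations named above):
# The integrated kernel:
ls -l /boot/Image
# The modules for the currently running kernel:
ls /lib/modules/$(uname -r)/
# Confirm the version/suffix is what you expected:
uname -r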
You can always ask more questions, but due to how details change with release, I suggest going through the official documents and asking about specific instructions after naming your exact JetPack/L4T/SDK Manager version. You already mentioned for a TX1 devel board, but you’d want to mention that again at the start of any new thread.