Applying updated kernel on host

I would like to be able to build the kernel from source on the host and not go through the process of reflashing.

If someone could provide a step-by-step guide of which files to transfer to the Orin Nano, and into which directories, so that flashing can be avoided, that would be great.

Side note: is it possible to compile from source on the Orin Nano itself (avoiding cross-compilation)?

It is possible. What happens depends a lot on what you wish to change. If you are merely adding features in module format, then it is quite simple. If you are integrating something new into the kernel Image itself, then it isn’t difficult, but it is a lot more detailed and not as risk-free. Finally, if you boot from external media, then you will have an initrd, and if that initrd needs your module in order to boot, then you need to modify the initrd as well. If your change (the “integrated” versus “module” issue) is such that all modules must be rebuilt, and you combine this with an initrd (typically used for booting external devices or devices with unusual filesystem characteristics, e.g., encrypted ones), then every module in the initrd probably needs to be replaced too.
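
For the simple module-only case, a rough sketch of an out-of-tree build against the running kernel looks like this (the module name is just a placeholder, and it assumes matching kernel headers or a prepared source tree are installed):
# Build in the module's source directory against the running kernel:
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
# Install into the matching module directory and refresh dependencies:
sudo make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install
sudo depmod -a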

If you look at the following, it shows the configuration of your currently running kernel (with one exception), i.e., the “symbols” which define the configuration:
zcat /proc/config.gz
(the one part which does not reflect the running system is CONFIG_LOCALVERSION; running “uname -r” reveals the actual CONFIG_LOCALVERSION via its suffix)
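
Since /proc/config.gz is available here, a quick way to compare the two:
# The suffix of the running kernel's version string is the effective CONFIG_LOCALVERSION:
uname -r
# What config.gz records for it (this is the one value not to trust blindly):
zcat /proc/config.gz | grep CONFIG_LOCALVERSION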

Are you only adding features? Do you know the symbol(s)?

Incidentally, if you have the disk space (a lot is required), then it is often easier to compile directly on the Jetson. Sometimes you might run into a gcc version requirement, but usually it is simple since there are no cross tools to install.
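
If you do build natively, a rough sketch of the prerequisites (package names are Ubuntu’s stock ones; adjust to your release):
df -h /        # check free space first; source plus build output is many GB
gcc --version  # compare against the version your release documentation expects
sudo apt-get install build-essential bc flex bison libssl-dev libncurses-dev
# libncurses-dev is only needed for the menuconfig/nconfig editors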

When JetPack flashes, it is L4T which gets installed (Linux for Tegra is Ubuntu plus NVIDIA drivers). Your L4T release can be found via:
head -n 1 /etc/nv_tegra_release

Documentation and the correct software for a given release can be found tied to that L4T release. See:
https://developer.nvidia.com/linux-tegra

Almost forgot: most of this involves setting up a configuration which is an exact match to the currently running configuration, followed by using a dependency-aware editor (such as the nconfig or menuconfig make targets) to add features, preferably as a module.
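
A minimal sketch of that flow, assuming a native build on the Jetson with the kernel source already unpacked (the path and the “-tegra” suffix are placeholders; match whatever your “uname -r” actually shows):
cd /path/to/kernel/source
zcat /proc/config.gz > .config                      # exact copy of the running configuration
./scripts/config --set-str LOCALVERSION "-tegra"    # match the running kernel's suffix
make olddefconfig                                   # fill in any symbols new to this source tree
make nconfig                                        # add your feature, preferably as "=m"
make -j$(nproc) Image modules                       # a native build takes a while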

Just to make sure I am not going down the wrong path, here is a lot more context. My end goal is to create a device tree and device driver for an OmniVision camera on the CSI port. I have created the device tree, compiled it to a .dtbo, moved it to /boot/, and edited the /boot/extlinux/extlinux.conf file. I have confirmed it is applied by checking /proc/device-tree/.
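
Roughly, the steps I followed were along these lines (file names are placeholders):
dtc -@ -I dts -O dtb -o my-camera-overlay.dtbo my-camera-overlay.dts
sudo cp my-camera-overlay.dtbo /boot/
# Then pointed /boot/extlinux/extlinux.conf at the overlay, rebooted,
# and checked that the new nodes appear under /proc/device-tree/.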

I have a device driver that I need to test, but it is currently an LKM. When I boot the kernel with the overlay applied and then insmod my LKM, the probe function is not getting called (I am assuming this is expected behavior). My thought is that if I “bake” the driver into the kernel, the probe function will get called at boot.
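
For reference, I was testing it roughly like this (the module name is a placeholder):
sudo insmod ./ov_camera.ko
dmesg | tail    # no sign of the probe function being called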

Based on your reply it sounds like I need to go ahead and modify the initrd? Is this the right approach for my current objective?

You probably don’t need to work on the initrd for this. I will back up and provide some context.

When the boot environment is running, it is its own operating system. Eventually, its one goal is to overwrite itself with Linux. Prior to doing that, the environment must be set up, and whatever Linux needs must be loaded. So far an initrd is not needed, although a device tree might be if the boot environment needs access to hardware before Linux loads. An example of such a requirement is accessing the boot device; a “deeper” need would be understanding the filesystem type if content must be retrieved from disk. For example, the kernel Image might come from an ext4 partition, or from a binary partition. No boot-stage driver is needed to retrieve binary content if the device itself can be accessed, but the boot stage requires an ext4 driver to retrieve a kernel Image from an ext4-formatted partition.

There are many filesystem types. The initrd itself is a very simple tree-based RAM filesystem understood by nearly every bootloader. These days a lot of boot environments can handle VFAT or ext4 as well. The XFS filesystem is popular for some large media, and is rarely understood by bootloaders. If your kernel is on an XFS-formatted partition, then you’d have to put the kernel itself in the initrd. If instead your kernel is on an ext4 “/boot”, but your modules are in “/lib/modules/”, and “/lib/modules/” is itself on XFS, then all is well if and only if the kernel Image self-contains the XFS driver (an “=y” configuration, not a module “=m”). This is because loading the XFS driver as a module stored on an XFS filesystem would be impossible.
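
You can see which case applies to your own kernel by checking the filesystem symbols, e.g.:
# "=y" means built into the Image; "=m" means it must be loaded as a module:
zcat /proc/config.gz | grep -E 'CONFIG_EXT4_FS=|CONFIG_XFS_FS='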

As a kind of adapter, imagine this initial ramdisk is a small filesystem containing “/lib/modules/”. The kernel is on ext4. Because the bootloader understands both initrd and ext4, it can (A) load the kernel Image, and then (B) the kernel can load its XFS driver from the initrd (both the kernel and the bootloader understand the initrd ramdisk filesystem). Now the bootloader does not need to read XFS, because handing control over to the Linux kernel allows the Linux kernel to read XFS. At this point the root is “pivoted” so that it is reparented onto the actual disk which runs XFS (the “/boot” is still on ext4). This pivot is the “pivot_root”. The kernel simply sees the filesystem root as now being the XFS disk.
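
If you are curious what your current initrd carries, its contents can be listed; a sketch assuming the usual gzip-compressed cpio at “/boot/initrd”:
lsinitramfs /boot/initrd | grep '\.ko'      # if initramfs-tools is installed
zcat /boot/initrd | cpio -t | grep '\.ko'   # or list it manually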

If we had wanted to boot directly from XFS, there would be a requirement to port the XFS driver to the bootloader, even though Linux understands XFS via a module. Not even Linux understanding XFS would help if the Image file is itself on XFS; that would mandate a driver in the bootloader.

Is your driver part of boot? It seems unlikely for a camera. Did you invalidate the existing modules by changing an integrated feature of the kernel Image? Probably. If you don’t use an initrd at the moment, then this means you must replace all kernel modules in “/lib/modules/$(uname -r)/kernel/”, and indirectly, you should change your CONFIG_LOCALVERSION to obtain a new “uname -r” (otherwise you have all new modules in the old directory).
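
A sketch of that replacement step from the kernel source tree, assuming CONFIG_LOCALVERSION was bumped so the modules land in a fresh directory (the Image file name is just a chosen example):
sudo make modules_install               # populates /lib/modules/<new uname -r>/kernel/
sudo cp arch/arm64/boot/Image /boot/Image.custom
# Then point the LINUX entry of /boot/extlinux/extlinux.conf at the new Image.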

But here’s the part which makes life much more difficult: if you do use an initrd, then the modules in it are a “minimal” subset of the modules in “/lib/modules/$(uname -r)/kernel/”. Those are also invalidated and probably won’t load (they might if you’re lucky). If you used “=y” in the main Image in combination with an initrd, then you most likely must also recreate the initrd with the valid versions of the modules it contains.
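
If you do end up needing a matching initrd, Ubuntu’s standard tool can generate one for the new module directory; a sketch (the version string is whatever your new “uname -r” will be):
sudo update-initramfs -c -k 5.10.xxx-custom
# Then point the INITRD entry of /boot/extlinux/extlinux.conf at the generated file.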

As far as the module not automatically loading, that is an entirely different topic. Making it a built-in is likely the wrong way to go about this. Is this camera “plug-n-play”? Is it USB or PCI? That plays a big part in the answer. If it is not plug-n-play, and the device cannot self-report what it is and where it is, then the device tree is what you need. If the device is USB or some other plug-n-play type, then there are “hot plug” mechanisms you can use to associate the driver with the device.
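
One quick way to see whether the device tree node and the driver ever matched is to look at the relevant bus in sysfs; a sketch assuming an I2C-attached camera (driver and device names are placeholders):
ls /sys/bus/i2c/drivers/            # the driver should appear here once registered
ls /sys/bus/i2c/drivers/ov_xxxx/    # a bound device shows up as a bus-address symlink
dmesg | grep -i probe               # probe or deferral messages, if any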
