This won’t be a complete answer, but it might be useful for your case…
Different carrier boards rarely require a different kernel unless there is some hardware on the carrier board which needs its own driver. As an example, the SoC itself provides several serial UARTs, and there are drivers for those; there are usually other serial drivers as well. If you had a custom carrier board which implements a serial UART from some other chipset, then you’d need a driver for that.
The biggest issue with custom carrier boards is that many pins on the Jetson module have more than one possible use. For example, one designer might find the trace layout is best when routing some function to certain pins, while another designer routes the same function to alternate pins. It is the job of the firmware to tell the module which function attaches to which pins, and in particular, this is the device tree. The device tree is usually what you must adjust.
Drivers tend to talk to hardware at a physical address on a bus, not via a virtual address on the memory controller. To find such devices there are basically two possibilities: (A) the device uses some sort of plug-n-play mechanism which allows it to self-describe, or (B) someone codes in the physical address and the driver works with that address. The former is only available on subsystems which can broadcast device presence and allow drivers to take ownership if they deem the device relevant; the latter is the device tree naming the physical address, the driver, and perhaps parameters to pass to the driver.
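If you want to see what the device tree of a running Jetson actually says, one generic way is to decompile it back to source form (the dtc tool may need to be installed first, e.g., from the device-tree-compiler package):
dtc -I fs -O dts -o extracted.dts /proc/device-tree
Within that output, each node’s “compatible” property is what matches a driver, and the “reg” property is the physical address that driver is told to use.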
The device tree is not really part of the kernel or the drivers; it is really a set of arguments passed to the drivers. However, because there is no need for a device tree fragment which applies to an unused driver, the device tree is built from the same configured kernel source. Once you configure the Linux kernel source, you can build not only the integrated (non-modular) main kernel image (for Jetsons, the file named “Image”, which is also a build target) and the modules (also part of the kernel, but dynamically loadable and unloadable), you also have the option to build a device tree target (the “dtbs” target). Keep in mind that there are also build targets for propagating a configuration once you have set up the configuration file; building the “Image” target propagates that configuration, but if you were to do something like build a module without building “Image”, then you’d use a target such as “modules_prepare” to propagate it (all of this is known as the “kconfig” system).
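As a rough sketch of those targets (run from the top of the kernel source; the defconfig name and any cross-compile or output-directory variables depend on your L4T release and on whether you build natively on the Jetson or cross-compile from a PC):
make tegra_defconfig
make Image
make modules
make dtbs
# to build only modules without rebuilding Image, propagate the config first:
make modules_prepare
make modules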
On a running Jetson you can find a pseudo file which lists the existing kernel’s configuration:
/proc/config.gz
(you could see the content with “zcat /proc/config.gz”; there are a number of ways to prevent that from scrolling away, or to filter for particular symbols)
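For example (any of these work; the file name used for saving a copy is just an example):
zcat /proc/config.gz | less
zcat /proc/config.gz | grep -i 'localversion'
zcat /proc/config.gz > running_config.txt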
Note that if you have a default kernel, then this is most likely nearly the same as the configuration produced from the “tegra_defconfig” target. I say “nearly” because there is a special “naming” parameter for customization:
CONFIG_LOCALVERSION
(this takes a string, and the default is “CONFIG_LOCALVERSION=-tegra”)
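If you are setting this by hand, one way to do it (from the top of the kernel source, after the .config file exists; the path to .config differs if you build in a separate output directory) is:
scripts/config --file .config --set-str LOCALVERSION "-tegra"
grep 'CONFIG_LOCALVERSION' .config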
When a kernel looks for modules it uses this path:
/lib/modules/$(uname -r)/kernel
The output of the command “uname -r” has a prefix of the kernel’s source version, and the suffix is the CONFIG_LOCALVERSION. Just as a contrived example, if your kernel source is version 5.15.0, and you set CONFIG_LOCALVERSION to “-tegra” prior to the build, then “uname -r” will respond with “5.15.0-tegra”. This is how the module search location is determined.
If your new kernel has the same integrated (non-modular) features and only adds modules, then chances are you can just build the modules and put the new modules in the correct place. If your kernel deviates substantially, e.g., it changes features built into the kernel, then the old modules probably won’t load correctly. Thus you would keep the same “uname -r” (the same CONFIG_LOCALVERSION string) if you are only adding or changing modules, but you might change it to something like “CONFIG_LOCALVERSION=-modified” if that base configuration changes (which allows both the old and new kernels to coexist and look for their modules in their own directories).
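As a sketch of the “modules only” case (the module path here is purely illustrative; use whatever subdirectory matches the driver’s location in the kernel source, or use “sudo make modules_install” to install everything at once):
sudo cp drivers/usb/serial/ch341.ko /lib/modules/$(uname -r)/kernel/drivers/usb/serial/
sudo depmod -a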
If you’ve changed the wiring layout of your carrier board, but it does not introduce new hardware (e.g., if it uses the SoC’s hardware and does not add some non-plug-n-play device to the carrier board itself), then you’d only change the device tree to tell it about that layout change.
I don’t use it, but the ch341 is a well-known driver, and there is a strong chance the driver is already there (but perhaps the hardware address is not known, and so loading the driver causes an error). Remember the “/proc/config.gz”? What do you see from this:
zcat /proc/config.gz | grep -i 'ch341'
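If the driver is enabled, you would expect output along these lines (the exact symbol name can vary between kernel versions, so treat this as an example):
CONFIG_USB_SERIAL_CH341=m
Here “=m” means it is built as a loadable module, “=y” would mean it is integrated into the Image, and no output (or “is not set”) means the feature was not enabled.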
Now if that UART is connected over USB, you wouldn’t worry about the device tree. In that case the hot plug system would broadcast the plug-in event, and something in udev would cause the driver to load. If this is directly wired to a bus, then the device tree would more or less announce to the driver where to talk to the UART.
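A quick way to verify the USB case (these are generic commands, not specific to any one carrier board):
dmesg --follow        # watch kernel messages while plugging the adapter in
lsusb                 # confirm the adapter enumerates on the USB bus
ls /dev/ttyUSB*       # a ch341-style USB UART typically shows up here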
There are cases where a manufacturer tries to provide its own driver, but if that driver is not compiled against your specific kernel configuration, and the driver is already available in the existing kernel source, then you are better off just using the one in the kernel source. If the manufacturer’s driver really is different, then it still has to be built compatible with the existing kernel configuration and version.
Someone else can probably help with the specifics of the device tree edits, but you might provide details of how the chipset is connected, and if you’ve made any device tree edits. If you have not edited the device tree, then you should state exactly how your carrier board layout follows or deviates from the reference carrier board. If you were to purchase a third party carrier board, then mostly it would use the same software as the development kit, but it would likely have device tree edits.