A shorter answer would be that both sources should be the same, but that the configuration used during the build may have differed from the default defconfig target (R36.x and newer) or the tegra_defconfig target (R35.x and earlier). A more detailed answer follows.
Let’s talk about two different NVIDIA content additions…
First, if one were installing the flash software manually (without JetPack/SDK Manager), then the “driver package” is the part which is the actual flash software. In recovery mode the Jetson is a custom USB device requiring a custom driver, and this is that driver. Unpacking it creates the “Linux_for_Tegra/” subdirectory and almost all of its content.
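As a sketch, the manual unpack looks something like this (the tarball name varies per release, so treat the one below as a placeholder):

tar xjf Jetson_Linux_R36.3.0_aarch64.tbz2
cd Linux_for_Tegra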
Next there is the sample root filesystem (rootfs). This unpacks into “Linux_for_Tegra/rootfs/”, and it is purely Ubuntu without NVIDIA content. From “Linux_for_Tegra/” the command “sudo ./apply_binaries.sh” is run, and this is what populates NVIDIA content into the sample rootfs and transforms it from being called Ubuntu to instead being called “Linux for Tegra” (L4T). Some content arrives via this latter apply_binaries.sh step.
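A sketch of that sequence (again, the exact tarball name depends on the release):

sudo tar xjpf Tegra_Linux_Sample-Root-Filesystem_R36.3.0_aarch64.tbz2 -C Linux_for_Tegra/rootfs/
cd Linux_for_Tegra
sudo ./apply_binaries.sh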
NVIDIA could include more content, but the mix of what is provided comes partly from apply_binaries.sh and partly from kernel content. Before L4T R36.x quite a bit of significant content came from NVIDIA’s out-of-tree kernel source. The Realtek driver support was never part of that out-of-tree content, but the configuration of the kernel build was from NVIDIA. That configuration (the tegra_defconfig target prior to L4T R36.x, and now just defconfig with the mainline-based R36.x) is no longer from NVIDIA.
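For reference, if you build the kernel yourself, the default configuration comes from one of these targets (a sketch assuming cross-compilation from an x86 host; adjust the toolchain to your setup):

# R36.x and newer (mainline-style kernel source)
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
# R35.x and earlier (NVIDIA kernel source)
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- tegra_defconfig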
When NVIDIA put together the initial software shipped with the Jetson it probably did include a customization of the defconfig. I have not looked, but if you still have the exact original Image available to boot, then you could boot that and literally ask the kernel what config it was built with. This command shows the build config (well, it is missing CONFIG_LOCALVERSION, but otherwise it is exact):
zcat /proc/config.gz | less -i
(you could then use “/realtek<enter>”, “/rtl<enter>”, or “/8169<enter>” to search for those)
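A non-interactive alternative, if you prefer (just a convenience):

zcat /proc/config.gz | grep -i -e realtek -e rtl -e 8169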
You could also just use cp to copy that file somewhere else (it isn’t a real file; it lives in RAM and is the kernel pretending to be a file with a list of its build-time configuration), and then examine that.
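For example (the destination is arbitrary):

cp /proc/config.gz /tmp/
zcat /tmp/config.gz | less -i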
I also have to caution you that the initrd can cause strangeness if you don’t understand it. When you change the actual kernel, and not just modules, then the initrd (when used; an initrd is not always used) also needs to be updated.
The changes you’ve made probably demand not just that the new kernel Image file be used, but 100% of all modules as well. The initrd is an adapter between boot stages and the kernel mounting the “/” filesystem (the root filesystem, or rootfs). The initrd is itself a very simple filesystem in RAM which eventually vanishes, but any modules required for reaching the “pivot root” step (which discards the initrd rootfs and switches to the real “/” rootfs) have to be present inside the initrd itself. The initrd contains a small subset of the modules which need loading to reach the mount of “/”. Your extlinux.conf is nice in the sense that it still has the old kernel available, but you also need to create a second initrd; that initrd would need to contain the same modules the original contains, but built against the new kernel. Any time a kernel changes in any way other than just modules it implies the possibility that the previous modules will not load. There is a binary interface, and address changes might make a module fail to load when the kernel itself changes.
However, if no modules in the initrd are required for completing boot to the point of mounting “/”, then the initrd won’t cause any problems even if the modules don’t load. That is possible, though I do not know what is working or failing in your case.
Taking the explanation a bit further, when a configuration item is set with “=y”, the driver is integrated into the base kernel. When configured with “=m”, it is built as a module and is not part of the base kernel. Each feature will be one of “integrated” or “modular”. Changing modular features is more or less without consequence and quite simple. Once you change integrated features you risk invalidating the initrd and all modules (it isn’t a guarantee of invalidating everything, but it does mean you cannot assume otherwise, and it is then time to rebuild all modules; this in turn means you might also need a new initrd).
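As an illustration using the Realtek r8169 driver symbol (assuming that is the relevant one for your hardware):

# integrated: part of the base kernel Image
CONFIG_R8169=y
# modular: built as r8169.ko, loaded at runtime
CONFIG_R8169=m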
Now if you started with a configuration which 100% exactly matches the original kernel, including CONFIG_LOCALVERSION, and then all you did was add or remove modules, you would not need a new Image at all. The only requirement would be copying the module into the right place and running “sudo depmod -a” or rebooting. See the suffix of the output from “uname -r”; the NVIDIA kernel has CONFIG_LOCALVERSION set to “-tegra”, and this becomes part of the module load path (this is the part most people forget to adjust if they intend to only add modules).
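A sketch of adding a single module this way (the .ko name and subdirectory are placeholders; put it wherever the matching in-tree driver would live):

sudo cp r8169.ko /lib/modules/$(uname -r)/kernel/drivers/net/ethernet/realtek/
sudo depmod -a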
You’ve changed the “uname -r” to have the suffix “-custom”, and this is good if you are intending to replace the Image file itself. However, if you were just adding modules, then there was no need for a new Image.
The original question though is about why those drivers were not present. I will suggest you check the driver symbols inside of “/proc/config.gz” for those specific kernels: boot to one, save that config (named after the “uname -r”, probably), then do the same in the new kernel. Compare.
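A sketch of that comparison (the two file names in the diff are placeholders; substitute whatever “uname -r” reports in each boot):

# while booted into the original kernel
zcat /proc/config.gz > ~/config-$(uname -r)
# reboot into the new kernel, then repeat
zcat /proc/config.gz > ~/config-$(uname -r)
diff ~/config-5.10.120-tegra ~/config-5.10.120-custom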