Thank you for the suggestion! This link was indeed helpful.
I understand that there will be many steps involved to apply the patch.
Just to document my process for the future, I think these are the steps I need to do:
download the l4t sources
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ ./source_sync -k -t jetson_36.4.4
(jetson_36.4.4 is the release tag referred to in the instructions; it can be found in the Jetson Linux Release Notes, which are linked on the Jetson Linux Download Page)
NOTE: The Linux_for_Tegra directory was obtained by flashing JetPack 6.2.1 with SDK Manager, but it could also have been downloaded from the Jetson Linux Download Page. However, this means that I already have a populated rootfs. I am opting to keep it that way and partially overwrite it with the following commands. My reasoning: otherwise I might be missing things I didn't think of. Of course this can fail; in that case I'll do this over and start completely from scratch.
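To double-check that source_sync really checked out the right tag, something like this should work (assuming the synced kernel source is left behind as a git checkout, which I believe it is):
user@host:source $ git -C kernel/kernel-jammy-src describe --tags   # should mention jetson_36.4.4 somewhere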
get the cross-compilation toolchain
The toolchain can be also found on the Jetson Linux Download Page.
Instructions to install it are here, but these are the exact commands I ran:
user@host:~ $ mkdir l4t-gcc && cd l4t-gcc
user@host:l4t-gcc $ wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v3.0/toolchain/aarch64--glibc--stable-2022.08-1.tar.bz2
user@host:l4t-gcc $ tar xf aarch64--glibc--stable-2022.08-1.tar.bz2
user@host:l4t-gcc $ echo "export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-" > set_env
user@host:l4t-gcc $ . ./set_env
This differs from the instructions only in downloading the toolchain directly with wget, and in saving the environment variable to a text file so I don't have to remember it or clutter my .bashrc.
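A quick sanity check that the cross-compiler is actually where CROSS_COMPILE points (this just runs the cross gcc and prints its version):
user@host:l4t-gcc $ ${CROSS_COMPILE}gcc --version   # should report the aarch64 buildroot toolchain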
apply the patch
Exciting. We shall apply the patch.
Get the patch source from here (copy-pasting in this forum changes some characters for me, otherwise I'd simply include it here)
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ echo "<patch source>" > gpio_jp6_patch
user@host:source $ cd kernel/kernel-jammy-src
user@host:kernel-jammy-src $ patch -p1 < ../../gpio_jp6_patch
patching file drivers/pinctrl/tegra/pinctrl-tegra.c
patching file drivers/pinctrl/tegra/pinctrl-tegra.h
instructions on how to apply a patch to a Linux kernel can be found here
So far so easy.
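In hindsight, a dry run before actually applying would have been the careful way to do it; patch supports that:
user@host:kernel-jammy-src $ patch -p1 --dry-run < ../../gpio_jp6_patch   # only checks whether the patch applies, changes nothing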
Now we'll build the kernel and flash it. Let's see how that goes.
I am not entirely sure if I need to include the device tree overlay files which were forged out of the blood and tears of the provided Excel sheet, or if I can use the wonderful /opt/nvidia/jetson-io/jetson-io.py on the Jetson after the flash. I opt for the hopeful route, and accept that I may have to do it the hard way after all. But if it works with jetson-io.py, then I can avoid installing Windows in the future, which may also avoid Windows breaking the Ubuntu host installation, which of course happened this time. My recommendation: install Windows in a VirtualBox. Excel has a free 5-day trial period, and is perhaps even able to generate device tree files after that - who knows. But let's get back to it.
build the jetson linux kernel
original instructions are here
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ . ~/l4t-gcc/set_env
user@host:source $ make -C kernel
note that I'm sourcing the set_env file that I created when getting the cross-compilation toolchain
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ export INSTALL_MOD_PATH="$(pwd)/rootfs/"
user@host:Linux_for_Tegra $ cd source
user@host:source $ sudo -E make install -C kernel
user@host:source $ cp kernel/kernel-jammy-src/arch/arm64/boot/Image ../kernel/Image
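Not strictly necessary, but to reassure myself that the copied Image really is an arm64 kernel image:
user@host:source $ file ../kernel/Image   # should say something like "Linux kernel ARM64 boot executable Image"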
build the out-of-tree modules
Not entirely sure whether the patch counts as in-tree or out-of-tree (it touches drivers/pinctrl inside kernel-jammy-src, so probably in-tree). As we already have a populated rootfs, this may not be necessary. But we're not getting paid for sitting around, so let's do it.
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ . ~/l4t-gcc/set_env
user@host:source $ export KERNEL_HEADERS="$(pwd)/kernel/kernel-jammy-src"
user@host:source $ make modules
user@host:source $ export INSTALL_MOD_PATH="$(pwd)/../rootfs/"
user@host:source $ sudo -E make modules_install
user@host:source $ cd ..
user@host:Linux_for_Tegra $ sudo ./tools/l4t_update_initrd.sh
The instructions say that I could also do this natively on the target machine? So, could I update the patched kernel without re-flashing the whole thing? Well… I probably misunderstand. Let's ignore this piece of information for now and go on.
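Before moving on, a quick check that the modules actually landed in the rootfs (the exact directory name depends on the kernel version I just built, so I only know roughly what to expect):
user@host:Linux_for_Tegra $ ls rootfs/lib/modules/   # should list a directory named after the freshly built kernel version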
build the DTBs
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ . ~/l4t-gcc/set_env
user@host:source $ export KERNEL_HEADERS="$(pwd)/kernel/kernel-jammy-src"
user@host:source $ make dtbs
user@host:source $ cp kernel-devicetree/generic-dts/dtbs/* ../kernel/dtb/
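And a quick look to confirm the devkit DTBs really ended up where the flashing tools expect them (if I read the naming convention right, the AGX Orin devkit files contain p3701/p3737):
user@host:source $ ls ../kernel/dtb/ | grep -i p3701   # expecting tegra234-p3701-...-p3737-... files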
Alright, now we have a hopefully properly patched rootfs. Let’s figure out how to use the flashing script. Not sure if I need to flash bootloader, kernel and rootfs, or just one of them. Should I use flash.sh, nvsdkmanager_flash.sh or l4t_initrd_flash.sh? Let’s investigate.
Well, first I read this beautiful guide on the root file system and realized I might want to have a default user. So I created one.
create default user
of course, I used a highly original username, password and hostname
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/tools
user@host:tools $ sudo ./l4t_create_default_user.sh -u <username> -p <password> -a -n <hostname> --accept-license
On the question which flashing script I should use, I opted for flash.sh because it has the shortest name.
flash
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ sudo ./flash.sh jetson-agx-orin-devkit nvme0n1p1
So, the script ended with "Flashing completed" (and more, but no errors), it took a while, and then the Jetson rebooted… but I am amazed: it starts up with the same OS as before. Not really sure what is going on. It seems that it didn't flash at all. So strange. Just to be sure, I checked whether the GPIO is still broken, and of course it is. Where did it flash to? /dev/null? Did I just destroy a drive on the host machine? What happened?
Phew… this quick little patch feels like jumping through a labyrinth of falling rocks. I have no idea if I am close to the exit or went down the wrong path somewhere.
I guess this may be because flash.sh "optionally flashes the root file system to an internal or external storage device"… duh… okay. But how do I flash this so the OS ends up on the NVMe drive? Also, is the NVMe drive internal or external if it is connected to the DevKit's internal M.2 slot? My guess is external, as it's probably referring to the Orin module itself, not the DevKit. I hope I don't have to also prepare a partition layout.
Let's try nvsdkmanager_flash.sh, and when (not if) this doesn't work, we'll go for the last and hopefully correct script. However, I am sure the other scripts would also work somehow, if I were able to crack the riddle of which parameters to set.
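For reference, I think the l4t_initrd_flash.sh variant for flashing to NVMe would look roughly like this, going by my reading of the Quick Start; the exact XML config file names may differ per release, so treat this as an unverified sketch rather than something I ran:
user@host:Linux_for_Tegra $ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
    -c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
    --showlogs --network usb0 jetson-agx-orin-devkit internal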
Anyways, I leave this running now, and come back to this post when it’s done.
[Edit:] Amazing! The flash worked!
flash correctly
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ sudo ./nvsdkmanager_flash.sh --storage nvme0n1p1
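Before poking at the GPIO, it's worth confirming that the freshly built kernel is actually the one running on the target; the build user, host and timestamp in /proc/version should point at my machine rather than NVIDIA's build servers:
user@jetson: $ uname -r
user@jetson: $ cat /proc/version   # build user/host and timestamp should match my local build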
Now, let's see if the GPIO works… nope
I did:
configure 40pin header
user@jetson: $ sudo /opt/nvidia/jetson-io/jetson-io.py
→ Configure Jetson 40pin Header
→ Configure header pins manually
→ set [*] spi1 (19,21,23,24,26)
→ Back
→ Save Pin Changes
→ Save and reboot to reconfigure pins
setup user
user@jetson: $ sudo usermod -a -G dialout user
user@jetson: $ sudo usermod -a -G gpio user
user@jetson: $ sudo modprobe spidev # enable SPI
user@jetson: $ echo "spidev" | sudo tee -a /etc/modules # enable SPI on startup
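For the record, this is roughly how I'm checking whether SPI/GPIO came up (device nodes and the debugfs GPIO listing are generic kernel interfaces, nothing patch-specific):
user@jetson: $ ls /dev/spidev*   # spidev0.0 etc. should appear once the overlay and module are in place
user@jetson: $ sudo cat /sys/kernel/debug/gpio   # current state of all registered GPIOs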
I will now attempt to add the dtsi files from the pinmux sheet to the flash.
These are the settings (right-click on the image and open it in a new tab if you can't read it):

copy dtsi files
note: I have the pinmux dtsi files in ~/Documents/pinmux/02_pinmux_sp1/
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/bootloader
user@host:bootloader $ cp ~/Documents/pinmux/02_pinmux_sp1/Orin-jetson_agx_orin-pinmux.dtsi ./generic/BCT/pinmux.dtsi
user@host:bootloader $ cp ~/Documents/pinmux/02_pinmux_sp1/Orin-jetson_agx_orin-gpio-default.dtsi ./gpio-default.dtsi
adjust board.conf
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ cat jetson-agx-orin-devkit.conf | grep "^source"
source "${LDK_DIR}/p3737-0000-p3701-0000.conf.common";
user@host:Linux_for_Tegra $ vim ./p3737-0000-p3701-0000.conf.common # edit PINMUX
user@host:Linux_for_Tegra $ cat ./p3737-0000-p3701-0000.conf.common | grep PINMUX
PINMUX_CONFIG="pinmux.dtsi";
user@host:Linux_for_Tegra $ vim ./bootloader/generic/BCT/pinmux.dtsi
user@host:Linux_for_Tegra $ cat ./bootloader/generic/BCT/pinmux.dtsi | grep 'dtsi"'
#include "./gpio-default.dtsi"
flash again
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ sudo ./nvsdkmanager_flash.sh --storage nvme0n1p1
Okay, if this doesn’t work, perhaps I shouldn’t use the prepopulated rootfs, and start from scratch. But first, let’s see…