How to apply GPIO patch for Jetpack 6.x for Orin AGX Devkit

Hi there,

Perhaps I am an especially slow learner, but so far I have not figured out how to make GPIO work on Jetpack 6.2.1 for the Jetson AGX Orin Devkit 64GB. Specifically, I want to use SPI1.

It works with Jetpack 5.

Currently I have flashed Jetpack 6.2.1 to the NVMe drive with SDKManager and upgraded it to R36.4.7.

My hunch is that this item in the AGX Orin FAQ is exactly what I need. The description sounds promising.

However, how do I apply this patch?

Subquestions:
Where do I find the files to be patched?
Do I need to re-flash both bootloader and OS to apply the patch? Can I do this with SDKManager, or do I need to use the flash scripts in Linux_for_Tegra directory?
Do I need to generate dtsi files with the pinmux sheet, or can I use /opt/nvidia/jetson-io/jetson-io.py as in Jetpack 5 to enable SPI1?

Thank you for reading this. Ideally I'm looking for step-by-step instructions from the current (broken) state to a working SPI1.

Refer to this link to download the sources for updating:

You should use the flashing script for flashing.

Thank you for the suggestion! This link was indeed helpful.

I understand that there will be many steps involved to apply the patch.

Just to document my process for the future, I think these are the steps I need to do:

download the l4t sources

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ ./source_sync.sh -k -t jetson_36.4.4

(jetson_36.4.4 is the release tag referred to in the instructions; it can be found in the Jetson Linux Release Notes, which are linked on the Jetson Linux Download Page.)
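
A quick way to double-check which release is actually on the board is to read it from the device itself; this is just my own sanity check, assuming the Jetson is already flashed and booted:

user@jetson:~ $ cat /etc/nv_tegra_release # prints the installed L4T release, e.g. "# R36 (release), REVISION: ..."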

NOTE: The Linux_for_Tegra directory was obtained by flashing Jetpack 6.2.1 with SDKManager, but it could also have been downloaded from the Jetson Linux Download Page. This means I already have a populated rootfs. I am opting to keep it that way and partially overwrite it with the following commands; my reasoning is that otherwise I might miss things I didn't think of. This can certainly fail, in which case I'll start over completely from scratch.

get the cross-compilation toolchain

The toolchain can also be found on the Jetson Linux Download Page.
Instructions to install it are here, but these are the exact commands I ran:

user@host:~ $ mkdir l4t-gcc && cd l4t-gcc
user@host:l4t-gcc $ wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v3.0/toolchain/aarch64--glibc--stable-2022.08-1.tar.bz2
user@host:l4t-gcc $ tar xf aarch64--glibc--stable-2022.08-1.tar.bz2
user@host:l4t-gcc $ echo "export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-" > set_env
user@host:l4t-gcc $ . ./set_env

It differs from the instructions only in that I download the toolchain directly with wget and save the environment variable in a text file, so I don't have to remember it or clutter my .bashrc.
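
As a quick sanity check (my own addition, not part of the official instructions), you can verify that the variable points at a working compiler:

user@host:l4t-gcc $ "${CROSS_COMPILE}gcc" --version # should report an aarch64 buildroot gcc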

apply the patch

Exciting. We shall apply the patch.

Get the patch source from here (copy & pasting in this forum changes some characters for me, otherwise I'd simply include it here):

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ echo "<patch source>" > gpio_jp6_patch
user@host:source $ cd kernel/kernel-jammy-src
user@host:kernel-jammy-src$ patch -p1 < ../../gpio_jp6_patch
patching file drivers/pinctrl/tegra/pinctrl-tegra.c
patching file drivers/pinctrl/tegra/pinctrl-tegra.h

Instructions on how to apply a patch to a Linux kernel can be found here.
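
One extra step I'd recommend (my own habit, not from the instructions): patch can do a dry run, so you can check that all hunks apply cleanly before modifying anything:

user@host:kernel-jammy-src$ patch -p1 --dry-run < ../../gpio_jp6_patch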

So far so easy.
Now we'll build the kernel and flash it. Let's see how that goes.
I am not entirely sure if I need to include the device tree overlay files which were forged out of the blood and tears of the provided Excel sheet, or if I can use the wonderful /opt/nvidia/jetson-io/jetson-io.py on the Jetson after the flash. I opt for the hopeful route, and accept that I may have to do it the hard way after all. But if it works with jetson-io.py, then I can avoid installing Windows in the future, which may also avoid Windows breaking the Ubuntu host installation, which of course happened this time. My recommendation: install Windows in a VirtualBox VM. Excel has a free 5-day trial period, and is perhaps even able to generate device tree files after that - who knows. But let's get back to it.

build the jetson linux kernel

original instructions are here

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ . ~/l4t-gcc/set_env
user@host:source $ make -C kernel

note that I'm sourcing the set_env file I created when getting the cross-compilation toolchain

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ export INSTALL_MOD_PATH="$(pwd)/rootfs/"
user@host:Linux_for_Tegra $ cd source
user@host:source $ sudo -E make install -C kernel
user@host:source $ cp kernel/kernel-jammy-src/arch/arm64/boot/Image ../kernel/Image

build the out-of-tree modules

not entirely sure if the patch is in- or out-of-tree. As we already have a populated rootfs, this may not be necessary. But we’re not getting paid for sitting around, so let’s do it.
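
For what it's worth, the two patched files sit inside kernel/kernel-jammy-src, so the patch itself is in-tree; the "out-of-tree modules" are the NVIDIA drivers that live next to the kernel in the source tree (the nvidia-oot directory, assuming the layout that source_sync.sh gave me). A quick look:

user@host:source $ ls kernel/kernel-jammy-src/drivers/pinctrl/tegra/pinctrl-tegra.[ch]
user@host:source $ ls -d nvidia-oot # the out-of-tree modules built later by 'make modules'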

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ . ~/l4t-gcc/set_env
user@host:source $ export KERNEL_HEADERS="$(pwd)/kernel/kernel-jammy-src"
user@host:source $ make modules
user@host:source $ export INSTALL_MOD_PATH=$(pwd)/../rootfs/
user@host:source $ sudo -E make modules_install
user@host:source $ cd ..
user@host:Linux_for_Tegra $ sudo ./tools/l4t_update_initrd.sh

The instructions say that I could also do this natively on the target machine? So, could I update the patched kernel without re-flashing the whole thing? Well… I probably misunderstand. Let's ignore this piece of information for now and move on.

building the DTBs

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/source
user@host:source $ . ~/l4t-gcc/set_env
user@host:source $ export KERNEL_HEADERS="$(pwd)/kernel/kernel-jammy-src"
user@host:source $ make dtbs
user@host:source $ cp kernel-devicetree/generic-dts/dtbs/* ../kernel/dtb/
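
To check that the AGX Orin devkit DTB actually came out of that build (just a quick grep on my side; the exact file names can differ per release):

user@host:source $ ls kernel-devicetree/generic-dts/dtbs/ | grep p3701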

Alright, now we have a hopefully properly patched rootfs. Let’s figure out how to use the flashing script. Not sure if I need to flash bootloader, kernel and rootfs, or just one of them. Should I use flash.sh, nvsdkmanager_flash.sh or l4t_initrd_flash.sh? Let’s investigate.

Well, first I read this beautiful guide on the root file system and realized I might want to have a default user. So I created one.

create default user

of course, I used a highly original username, password and hostname
user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/tools
user@host:tools $ sudo ./l4t_create_default_user.sh -u <username> -p <password> -a -n <hostname> --accept-license

On the question which flashing script I should use, I opted for flash.sh because it has the shortest name.

flash

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ sudo ./flash.sh jetson-agx-orin-devkit nvme0n1p1

So, the script ended with "Flashing completed" (and more, but no errors). It took a while and then the Jetson rebooted… but I am amazed. It is starting up with the same OS as before. Not really sure what is going on. It seems that it didn't flash at all. So strange. Just to be sure, I checked whether the GPIO is still broken, and of course it is. Where did it flash to? /dev/null? Did I just destroy a drive on the host machine? What happened?

Phew… this quick little patch feels like jumping through a labyrinth of falling rocks. I have no idea if I am close to the exit or went down the wrong path somewhere.

I guess this may be because flash.sh "optionally flashes the root file system to an internal or external storage device"… duh… okay. But… how do I flash this so the OS ends up on the NVMe drive? Also, is the NVMe drive internal or external if it is connected to the DevKit's internal M.2 slot? My guess is external, as it's probably referring to the Orin board, not the DevKit. I hope I don't have to also prepare a partition layout.

Let's try nvsdkmanager_flash.sh, and when (not if) this doesn't work, we'll go for the last and hopefully correct script. However, I am sure the other scripts would also work somehow, if I were able to crack the riddle of which parameters I need to set.
Anyway, I'll leave this running now and come back to this post when it's done.
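
For reference, I believe the "long form" for flashing everything to the NVMe drive with the initrd flow looks roughly like the line below. I have not verified this exact invocation in this run; it is adapted from the Quick Start documentation, so double-check it against your release before using it:

user@host:Linux_for_Tegra $ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 -c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-agx-orin-devkit external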

[Edit:] Amazing! The flash worked!

flash correctly

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ sudo ./nvsdkmanager_flash.sh --storage nvme0n1p1

Now, let’s see if the gpio works… nope

I did:

configure 40pin header

user@jetson: $ sudo /opt/nvidia/jetson-io/jetson-io.py

→ Configure Jetson 40pin Header
→ Configure header pins manually
→ set [*] spi1 (19,21,23,24,26)
→ Back
→ Save Pin Changes
→ Save and reboot to reconfigure pins

setup user

user@jetson: $ sudo usermod -a -G dialout user
user@jetson: $ sudo usermod -a -G gpio user
user@jetson: $ sudo modprobe spidev # enable SPI
user@jetson: $ echo "spidev" | sudo tee -a /etc/modules # enable SPI on startup
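
A quick way to check whether the SPI character devices actually showed up (assuming SPI1 maps to controller 0, which is what I see on the devkit):

user@jetson: $ ls /dev/spidev* # expect /dev/spidev0.0 and /dev/spidev0.1 once spidev is loaded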

I will now attempt to add the dtsi files from the pinmux sheet to the flash.
These are the settings (right-click on the image and open it in a new tab if you can't read it):

Screenshot from 2025-11-04 20-33-03

copy dtsi files

note: I have the pinmux dtsi files in ~/Documents/pinmux/02_pinmux_sp1/

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra/bootloader
user@host:bootloader $ cp ~/Documents/pinmux/02_pinmux_sp1/Orin-jetson_agx_orin-pinmux.dtsi ./generic/BCT/pinmux.dtsi
user@host:bootloader $ cp ~/Documents/pinmux/02_pinmux_sp1/Orin-jetson_agx_orin-gpio-default.dtsi ./gpio-default.dtsi

adjust board.conf

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ cat jetson-agx-orin-devkit.conf | grep "^source"
source "${LDK_DIR}/p3737-0000-p3701-0000.conf.common";
user@host:Linux_for_Tegra $ vim ./p3737-0000-p3701-0000.conf.common # edit PINMUX
user@host:Linux_for_Tegra $ cat ./p3737-0000-p3701-0000.conf.common | grep PINMUX
PINMUX_CONFIG="pinmux.dtsi";
user@host:Linux_for_Tegra $ vim ./bootloader/generic/BCT/pinmux.dtsi
user@host:Linux_for_Tegra $ cat ./bootloader/generic/BCT/pinmux.dtsi | grep 'dtsi"'
#include "./gpio-default.dtsi"

flash again

user@host:~ $ cd ~/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
user@host:Linux_for_Tegra $ sudo ./nvsdkmanager_flash.sh --storage nvme0n1p1

Okay, if this doesn’t work, perhaps I shouldn’t use the prepopulated rootfs, and start from scratch. But first, let’s see…

It is in the kernel source; please refer to the steps in Kernel Customization — NVIDIA Jetson Linux Developer Guide to sync and build the kernel.

You can simply replace /boot/Image for the kernel, or put the kernel modules onto the device manually.

If you are using the devkit, you can simply use jetson-io to enable SPI function.

Sorry that I’m not clear about your use case and the requirement here.
Do you want to use the pin as GPIO or SPI function?

From the following screenshot you shared, it seems you want to use those pins as GPIO?
Screenshot from 2025-11-04 20-33-03

@KevinFFF Thank you for taking the time to respond!

It is in the kernel source; please refer to the steps in Kernel Customization — NVIDIA Jetson Linux Developer Guide to sync and build the kernel.

Yes, thank you. I think I found and patched them successfully following these instructions.

You can simply replace /boot/Image for the kernel, or put the kernel modules onto the device manually.

So, as the last step of building the Jetson Linux kernel, I could copy the Image file directly to /boot/Image on the Jetson, reboot, and be done?

copy kernel image on jetson

So instead of:
user@host:source $ cp kernel/kernel-jammy-src/arch/arm64/boot/Image ../kernel/Image

I would do something like:
user@host:source $ scp kernel/kernel-jammy-src/arch/arm64/boot/Image user@jetson:/tmp/Image
user@host:source $ ssh user@jetson sudo -S mv /tmp/Image /boot/Image
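
And if the module side changed as well, I imagine the manual route would additionally look something like the lines below; this is only a sketch with an assumed staging directory (/tmp/staged-rootfs), not something I have actually run here:

user@host:source $ export INSTALL_MOD_PATH=/tmp/staged-rootfs
user@host:source $ make modules && make modules_install # stages the modules under /tmp/staged-rootfs/lib/modules/
user@host:source $ tar -C /tmp/staged-rootfs -cjf /tmp/modules.tbz2 lib/modules
user@host:source $ scp /tmp/modules.tbz2 user@jetson:/tmp/
user@host:source $ ssh -t user@jetson "sudo tar -C / -xjf /tmp/modules.tbz2 && sudo depmod -a"

With a staging directory under /tmp, modules_install can run as a normal user; sudo -E is only needed when installing straight into the BSP rootfs directory.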

If you are using the devkit, you can simply use jetson-io to enable SPI function.

I am using the devkit. So, I can completely skip the part where I copy the dtsi files and adjust board.conf. This sounds great.

Sorry that I’m not clear about your use case and the requirement here.
Do you want to use the pin as GPIO or SPI function?

From the following screenshot you shared, it seems you want to use those pins as GPIO?

I want to use SPI1 on the 40-pin header. We have an A/D converter with an SPI serial interface connected to the pins to read an analogue sensor. On Jetpack 5 (same Jetson) it works. Sorry if I was unclear about this. It is very possible that I made errors when filling in the pinmux sheet.

Just to make sure I don’t misunderstand, I should be able to do the following:
→ flash L4T R36.4.4 with SDKManager or flash script
→ get the kernel sources
→ apply the patch
→ build the kernel
→ copy the kernel image on the jetson
→ enable SPI1 with jetson-io.py
→ add user to gpio & dialout groups
→ enable spidev module
→ reboot
→ done?

Then I don’t understand why it didn’t work yet, but I’ll try to do it again from scratch.
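
As a final check for that workflow, the simplest test is probably a loopback: jumper pin 19 (SPI1_MOSI) to pin 21 (SPI1_MISO) and run the kernel's spidev_test tool (it is not preinstalled; it can be cross-compiled from tools/spi in the kernel source). Assuming SPI1 shows up as /dev/spidev0.0, roughly:

user@jetson: $ sudo ./spidev_test -D /dev/spidev0.0 -v -p "loopbacktest" # with pins 19 and 21 jumpered, the TX and RX dumps should match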

I would suggest updating them in the BSP package first and reflashing the board to apply the changes, in case you missed something.
Please remember that you may also need to update the NVIDIA OOT modules.

Okay, please simply run jetson-io to enable the SPI1 function on the 40-pin expansion header.

This workflow looks good to me.

Great, thank you so much for the reply!

I'll go through the workflow, including the suggested BSP & OOT updates, and will report back here.

It worked!!!

Thank you so much for your support @nagesh_accord and @KevinFFF, really appreciated.

As a side note: there was a wiring issue on our sensor as well.
The CS pin was not wired to SPI1_CS0 or SPI1_CS1 as chip select, but to CAN1_DIN. With Jetpack 5 this just worked anyway. With Jetpack 6 it did not, even after activating CAN1. This is just a note in case someone else makes a similar mistake, and obviously this is 100% my fault. A quick rebuild of the sensor circuit and an adjustment of our code to use the SPI1_CS0 pin instead fixed it.
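
For anyone double-checking their own wiring: with SPI1 enabled via jetson-io, the two hardware chip selects show up as separate device nodes. This is the mapping I see on the devkit, so treat it as an assumption for other setups:

user@jetson: $ ls /dev/spidev0.* # /dev/spidev0.0 uses SPI1_CS0 (pin 24), /dev/spidev0.1 uses SPI1_CS1 (pin 26)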

I described this troubleshooting process before as jumping through a labyrinth of falling rocks. I still stand by this statement. However, it is clear to me now that I am the leading architect.

This means I probably fixed the kernel patch issue with the first flash. Ouch.
However, I tried flashing this so many times that I ended up writing a script. It can be executed on the host (Ubuntu 20.04/22.04, of course).

I’ll share it here just in case someone finds it useful.

The script also compiles spidev_test from the kernel tools and runs it on the jetson after the flash.

I added a free license to include a no-warranty statement. I couldn’t find a license for the patch snippet in the original post… is there a license for it? Can I include it in a script like this?
If not, please let me know and I’ll remove it.

#!/usr/bin/env bash

# This is free and unencumbered software released into the public domain.
#
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
#
# In jurisdictions that recognize copyright laws, the author or authors
# of this software dedicate any and all copyright interest in the
# software to the public domain. We make this dedication for the benefit
# of the public at large and to the detriment of our heirs and
# successors. We intend this dedication to be an overt act of
# relinquishment in perpetuity of all present and future rights to this
# software under copyright law.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#
# For more information, please refer to <https://unlicense.org/>
#
# exception to this license is the patch snippet itself
# which is taken from here: https://forums.developer.nvidia.com/t/40hdr-spi1-gpio-padctl-register-bit-10-effect-by-gpiod-tools-in-jp6/301171/20


#################### UTILITIES

function yes_or_no {
  while true; do
    read -p "$* [y/n]: " yn
    case $yn in
      [Yy]*) return 0  ;;
      [Nn]*) echo "Aborted" ; return  1 ;;
    esac
  done
}

BOLD_RED='\033[1;31m'
BOLD_GREEN='\033[1;32m'
BOLD_BLUE='\033[1;34m'
BOLD_PURPLE='\033[1;35m'
BOLD_WHITE='\033[1;37m'
NC='\033[0m' # No Color

function dangerror(){
    echo -e "$BOLD_RED$1$NC"
}

N_PROGRESS="$(cat "$0" | grep "^progress" | wc -l)"
I_PROGRESS=1

function progress(){
    echo ""
    echo -e "${BOLD_WHITE}#### ${BOLD_BLUE}${1} ${BOLD_WHITE}PROGRESS => ${2}${NC} ($I_PROGRESS/$N_PROGRESS)"
    echo ""
    I_PROGRESS=$(( I_PROGRESS + 1 ))
}

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
PREVIOUS_DIR="$(pwd)"

#################### SCRIPT PARAMETERS

# ours
TARGET_USERNAME="${TARGET_USERNAME:-user}"
TARGET_PASSWORD="${TARGET_PASSWORD:-noreallyoushouldthinkofagoodpasswordthisisnotagoodpassword}"
TARGET_HOSTNAME="${TARGET_HOSTNAME:-nvidia}"
TOOLCHAIN_DIR="${TOOLCHAIN_DIR:-"$HOME/l4t-gcc"}"
L4T_ROOT_DIR="${L4T_ROOT_DIR:-"$HOME/l4t-patched"}"
JETPACK_VERSION="${JETPACK_VERSION:-"36.4.4"}" # if you change this, you need to search for other things to change in the script
FLASH_STORAGE="${FLASH_STORAGE:-"nvme0n1p1"}"
# resources
# note: if changing URLs, please adjust verification below
TOOLCHAIN_URL="https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v3.0/toolchain/aarch64--glibc--stable-2022.08-1.tar.bz2"
BSP_URL="https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v4.4/release/Jetson_Linux_r36.4.4_aarch64.tbz2"
ROOTFS_URL="https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v4.4/release/Tegra_Linux_Sample-Root-Filesystem_r36.4.4_aarch64.tbz2"
# derived nvidia
export CROSS_COMPILE="$TOOLCHAIN_DIR/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-"
export KERNEL_HEADERS="$L4T_ROOT_DIR/Linux_for_Tegra/source/kernel/kernel-jammy-src"
export INSTALL_MOD_PATH="$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/"

progress "$0" "setting parameters"

echo "collected environment variables:"
echo "TARGET_USERNAME: $TARGET_USERNAME"
echo "TARGET_PASSWORD: $TARGET_PASSWORD"
echo "TARGET_HOSTNAME: $TARGET_HOSTNAME"
echo "TOOLCHAIN_DIR: $TOOLCHAIN_DIR"
echo "L4T_ROOT_DIR: $L4T_ROOT_DIR"
echo "JETPACK_VERSION: $JETPACK_VERSION"
echo "FLASH_STORAGE: $FLASH_STORAGE"
echo -e "${BOLD_PURPLE}"
echo "overwrite environment variables like:"
echo "$ export TARGET_USERNAME=henk; export TARGET_PASSWORD=dev123; $0"
echo -e "${NC}"
echo "resources:"
echo "TOOLCHAIN_URL: $TOOLCHAIN_URL"
echo "BSP_URL: $BSP_URL"
echo "ROOTFS_URL: $ROOTFS_URL"
echo ""
echo "derived:"
echo "CROSS_COMPILE: $CROSS_COMPILE"
echo "KERNEL_HEADERS: $KERNEL_HEADERS"
echo "INSTALL_MOD_PATH: $INSTALL_MOD_PATH"
echo ""
echo "assumptions:"
echo "- you are running this on the host, NOT the jetson"
echo "- there is enough disk space available (~40GB)"
echo "- jetson agx orin is attached to this host computer via usb"
echo "- jetson agx orin is in recovery mode"
echo "- user executing this script ($USER) has sudo rights and you know the password"
echo ""
dangerror "above assumptions will not be verified by the script"
echo ""

yes_or_no "does this look good?" || exit 1

#################### ACTION

progress "$0" "ensure toolchain"

if [ -d "$TOOLCHAIN_DIR" ];
then
    echo "-> toolchain directory found"
else
    mkdir -p "$TOOLCHAIN_DIR"
    echo "-> toolchain directory created"
fi

if [ -f "${CROSS_COMPILE}gcc" ];
then
    echo "-> toolchain found"
else
    echo "-> toolchain not found, downloading"
    cd "$TOOLCHAIN_DIR"
    wget "$TOOLCHAIN_URL"
    tar xf "$(basename "$TOOLCHAIN_URL")"
    rm "$(basename "$TOOLCHAIN_URL")"
fi

if "${CROSS_COMPILE}gcc" --version | grep -q "aarch64-buildroot-linux-gnu-gcc.br_real (Buildroot 2022.08) 11.3.0";
then
    echo "-> toolchain verified"
else
    dangerror "ERROR, could not verify toolchain"
    exit 1
fi

progress "$0" "ensure bsp driver package"

if [ -d "$L4T_ROOT_DIR" ];
then
    echo "-> l4t root directory found"
else
    mkdir -p "$L4T_ROOT_DIR"
    echo "-> l4t root directory created"
fi

if [ -d "$L4T_ROOT_DIR/Linux_for_Tegra" ];
then
    echo "-> bsp driver package directory found"
else
    echo "-> bsp driver package not found, downloading"
    cd "$L4T_ROOT_DIR"
    wget "$BSP_URL"
    tar xf "$(basename "$BSP_URL")"
    rm "$(basename "$BSP_URL")"
fi

if "${L4T_ROOT_DIR}"/Linux_for_Tegra/flash.sh -Z | grep -q "Usage: sudo ./flash.sh \[options\] <target_board> <rootdev>";
then
    echo "-> bsp driver package verified"
else
    dangerror "ERROR, could not verify bsp driver package"
    exit 1
fi

progress "$0" "ensure l4t flash prerequisites"

if [ ! -f "$L4T_ROOT_DIR/installed_flash_prerequisites" ];
then
    sudo $L4T_ROOT_DIR/Linux_for_Tegra/tools/l4t_flash_prerequisites.sh
    touch "$L4T_ROOT_DIR/installed_flash_prerequisites"
else
    echo "-> flash requisites seem to be installed "
fi

progress "$0" "ensure root file system"

if [ ! -d "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs" ];
then
    dangerror "root file system directory not found, it should have been created with bsp driver package"
    exit 1
fi

if [ "$(ls -A "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs" | wc -l)" -gt 1 ];
then
    echo "-> root file system directory somewhat verified, as it is not empty"
else
    echo "-> root file system directory empty"
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs"
    sudo wget "$ROOTFS_URL"
    sudo tar xpf "$(basename "$ROOTFS_URL")"
    sudo rm "$(basename "$ROOTFS_URL")"
fi

progress "$0" "ensure l4t kernel sources"

if [ "$(ls -A "$L4T_ROOT_DIR/Linux_for_Tegra/source/kernel" | wc -l)" -gt 1 ];
then
    echo "-> l4t kernel sources somewhat verified, as directory is not empty"
else
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/source"
    ./source_sync.sh -k -t jetson_36.4.4
fi

progress "$0" "apply binaries"

if [ -f "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/var/lib/dpkg/triggers/nv-update-initrd" ];
then
    echo "-> binaries seem to be applied"
else
    cd "$L4T_ROOT_DIR/Linux_for_Tegra"
    sudo ./apply_binaries.sh
fi

progress "$0" "patch l4t kernel sources"

if [ -f "$L4T_ROOT_DIR/Linux_for_Tegra/source/gpio_jp6_patch" ];
then
    echo "-> patch seems to be already applied, as the patch file exists"
else
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/source"

##### PATCH SNIPPET BEGIN

    echo "diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
index 953a3a650a9e..7022faa2a7ce 100644
--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
+++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
@@ -281,7 +281,7 @@ static int tegra_pinctrl_set_mux(struct pinctrl_dev *pctldev,
 
 
 static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *pctldev,
-                                                       unsigned int offset)
+					unsigned int offset, struct tegra_pingroup_config **config)
 {
 		struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
 		unsigned int group, num_pins, j;
@@ -293,9 +293,12 @@ static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *
 			if (ret < 0)
 				continue;
 			for (j = 0; j < num_pins; j++) {
-				if (offset == pins[j])
+				if (offset == pins[j]) {
+					if (config)
+						*config = &pmx->pingroup_configs[group];
 
 				return &pmx->soc->groups[group];
+				}
 			}
 		}
 
@@ -309,12 +312,14 @@ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev,
 {
 	struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
 	const struct tegra_pingroup *group;
+	struct tegra_pingroup_config *config;
 	u32 value;
 
 	if (!pmx->soc->sfsel_in_mux)
 		return 0;
 
-	group = tegra_pinctrl_get_group(pctldev, offset);
+	group = tegra_pinctrl_get_group(pctldev, offset, &config);
+
 	if (!group)
 		return -EINVAL;
 
@@ -323,6 +328,7 @@ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev,
 		return -EINVAL;
 
 	value = pmx_readl(pmx, group->mux_bank, group->mux_reg);
+	config->is_sfsel = (value & BIT(group->sfsel_bit)) != 0;
 	value &= ~BIT(group->sfsel_bit);
 	pmx_writel(pmx, value, group->mux_bank, group->mux_reg);
 
@@ -335,12 +341,14 @@ static void tegra_pinctrl_gpio_disable_free(struct pinctrl_dev *pctldev,
 {
 	struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
 	const struct tegra_pingroup *group;
+	struct tegra_pingroup_config *config;
 	u32 value;
 
 	if (!pmx->soc->sfsel_in_mux)
 		return;
 
-	group = tegra_pinctrl_get_group(pctldev, offset);
+	group = tegra_pinctrl_get_group(pctldev, offset, &config);
+
 	if (!group)
 		return;
 
@@ -348,7 +356,8 @@ static void tegra_pinctrl_gpio_disable_free(struct pinctrl_dev *pctldev,
 		return;
 
 	value = pmx_readl(pmx, group->mux_bank, group->mux_reg);
-	value |= BIT(group->sfsel_bit);
+	if (config->is_sfsel)
+		value |= BIT(group->sfsel_bit);
 	pmx_writel(pmx, value, group->mux_bank, group->mux_reg);
 }
 
@@ -801,6 +810,11 @@ int tegra_pinctrl_probe(struct platform_device *pdev,
 	pmx->dev = &pdev->dev;
 	pmx->soc = soc_data;
 
+	pmx->pingroup_configs = devm_kcalloc(&pdev->dev,
+		pmx->soc->ngroups, sizeof(*pmx->pingroup_configs), GFP_KERNEL);
+	if (!pmx->pingroup_configs)
+		return -ENOMEM;
+
 	/*
 	 * Each mux group will appear in 4 functions' list of groups.
 	 * This over-allocates slightly, since not all groups are mux groups.

diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.h b/drivers/pinctrl/tegra/pinctrl-tegra.h
index 216cc59b62b4..a47ac519f3ec 100644
--- a/drivers/pinctrl/tegra/pinctrl-tegra.h
+++ b/drivers/pinctrl/tegra/pinctrl-tegra.h
@@ -8,6 +8,10 @@
 #ifndef __PINMUX_TEGRA_H__
 #define __PINMUX_TEGRA_H__
 
+struct tegra_pingroup_config {
+	bool is_sfsel;
+};
+
 struct tegra_pmx {
 	struct device *dev;
 	struct pinctrl_dev *pctl;
@@ -21,6 +25,8 @@ struct tegra_pmx {
 	int nbanks;
 	void __iomem **regs;
 	u32 *backup_regs;
+	/* Array of size soc->ngroups */
+	struct tegra_pingroup_config *pingroup_configs;
 };
 
 enum tegra_pinconf_param {
-- " > gpio_jp6_patch

##### PATCH SNIPPET END

    cd kernel/kernel-jammy-src
    patch -p1 < ../../gpio_jp6_patch
fi

progress "$0" "build jetson linux kernel"

if md5sum "$L4T_ROOT_DIR/Linux_for_Tegra/kernel/Image" | grep -q "1d635647bfd6f32e9cc9d70475a5d3fb";
then
    echo "-> kernel image seems to be unpatched, let's build the patched kernel image"
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/source"
    make -C kernel
    sudo -E make install -C kernel
    cp kernel/kernel-jammy-src/arch/arm64/boot/Image ../kernel/Image
else
    echo "-> patched kernel image seems to be already built and installed on host"
fi

progress "$0" "build out-of-tree modules"

if md5sum "$L4T_ROOT_DIR/Linux_for_Tegra/bootloader/l4t_initrd.img" | grep -q "41ea081480cf95980e454dc41260c4d2";
then
    echo "-> l4t_initrd.img seems to be untouched, let's build the out-of-tree modules"
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/source"
    make modules
    sudo -E make modules_install
    cd "$L4T_ROOT_DIR/Linux_for_Tegra"
    sudo ./tools/l4t_update_initrd.sh
else
    echo "-> out-of-tree modules seem already built and installed in l4t_initrd.img on host"
fi

progress "$0" "build dtbs"

if [ "$(ls "$L4T_ROOT_DIR/Linux_for_Tegra/kernel/dtb" | wc -l)" -eq 85 ];
then
    echo "-> DTBs don't seem to be built and installed"
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/source"
    make dtbs
    cp kernel-devicetree/generic-dts/dtbs/* ../kernel/dtb/
else
    echo "-> dtbs seem to be already built and installed"
fi

progress "$0" "customize operating system"

if [ ! -d "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/home/$TARGET_USERNAME" ];
then
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/tools"
    sudo ./l4t_create_default_user.sh -u $TARGET_USERNAME -p $TARGET_PASSWORD -a -n $TARGET_HOSTNAME --accept-license
else
    echo "-> default user ($TARGET_USERNAME) seems to be already created"
fi

if grep "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/etc/group" -nr -e "dialout" | grep -q "$TARGET_USERNAME$";
then
    echo "-> default user ($TARGET_USERNAME) already part of group dialout"
else
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs"
    sudo sed "s/15:dialout:x:20:/15:dialout:x:20:$TARGET_USERNAME/g" -i ./etc/group
fi

if grep "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/etc/group" -nr -e "gpio" | grep -q "$TARGET_USERNAME$";
then
    echo "-> default user ($TARGET_USERNAME) already part of group gpio"
else
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs"
    sudo sed "s/75:gpio:x:999:/75:gpio:x:999:$TARGET_USERNAME/g" -i ./etc/group
fi

if grep "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/etc/modules" -nr -e "^spidev$";
then
    echo "-> spidev already in /etc/modules"
else
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs"
    echo "spidev" | sudo tee --append ./etc/modules
fi

progress "$0" "compile spidev_test"

if [ ! -f "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/usr/local/bin/spidev_test" ];
then
    cd "$L4T_ROOT_DIR/Linux_for_Tegra/source/kernel/kernel-jammy-src/tools/spi"
    CROSS_COMPILE="$CROSS_COMPILE" make && \
    sudo mv ./spidev_test "$L4T_ROOT_DIR/Linux_for_Tegra/rootfs/usr/local/bin/spidev_test" || \
    dangerror "could not compile spidev_test program"
else
    echo "-> already compiled spidev_test"
fi

progress "$0" "flash on jetson"

if lsusb | grep -q "0955:7023 NVIDIA Corp. APX";
then
    echo "-> recognized AGX Orin in recovery mode via USB"
else
    echo "-> did not recognize AGX Orin in recovery mode via USB"
    echo "-> you are not ready to flash until you connect your AGX Orin via USB and set it in recovery mode"
    echo "-> let's see if AGX Orin is connected via USB in normal mode.."
    if lsusb | grep -q "0955:7020 NVIDIA Corp. L4T";
    then
        echo "-> recognized AGX Orin in normal mode via USB"
        if yes_or_no "should we attempt to reboot into recovery mode?";
        then
            ssh -t -o ServerAliveInterval=1 -o ServerAliveCountMax=1 $TARGET_USERNAME@192.168.55.1 sudo -S reboot -f forced-recovery
        fi
    else
        echo "-> did not recognize AGX Orin in normal mode via USB"
    fi
    while sleep 1;
    do
        if lsusb | grep -q "0955:7023 NVIDIA Corp. APX";
        then
            echo "-> recognized AGX Orin in recovery mode via USB"
            break;
        else
            echo "-> waiting for AGX Orin in recovery mode connected via USB"
        fi
    done
fi

yes_or_no "ready to flash?" || exit

cd "$L4T_ROOT_DIR/Linux_for_Tegra"
sudo ./nvsdkmanager_flash.sh --storage $FLASH_STORAGE

yes_or_no "please wait until the jetson finished rebooting and then say 'yes'" || exit

progress "$0" "enable SPI1 with jetson-io.py"

ssh -t -o ServerAliveInterval=1 -o ServerAliveCountMax=1 "$TARGET_USERNAME@192.168.55.1" sudo /opt/nvidia/jetson-io/jetson-io.py

progress "$0" "verify SPI1 with spidev_test"

echo ""
echo "-> in order to run the test program, you need to connect input and output to create a loop"
echo "please connect pin 19 (SP1_MOSI) and pin 21 (SPI1_MISO)"
echo ""
echo "pinout: https://jetsonhacks.com/nvidia-jetson-agx-orin-gpio-header-pinout/"
echo ""
yes_or_no "please wait until the jetson finished rebooting and then say 'yes'" || exit

ssh -t -o ServerAliveInterval=1 -o ServerAliveCountMax=1 "$TARGET_USERNAME@192.168.55.1" sudo /usr/local/bin/spidev_test -v -D /dev/spidev0.0 -p doesitwork

echo ""
echo "D O N E ?"
echo ""
cd "$PREVIOUS_DIR"

Thank you for sharing the script and I think it should be fine.