GPIO Permission error through Docker

Hello, I’ve managed to get GPIO working fine outside the Docker environment. However, when I try to access the GPIO pins from within Docker, I keep getting this error:

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ ros2 run custom_stuff trhuster.py 
Traceback (most recent call last):
  File "/workspaces/isaac_ros-dev/install/custom_stuff/lib/custom_stuff/trhuster.py", line 4, in <module>
    import Jetson.GPIO as GPIO
  File "/home/admin/.local/lib/python3.8/site-packages/Jetson/GPIO/__init__.py", line 1, in <module>
    from .gpio import *
  File "/home/admin/.local/lib/python3.8/site-packages/Jetson/GPIO/gpio.py", line 33, in <module>
    raise RuntimeError("The current user does not have permissions set to access the library functionalites. Please configure permissions or use the root user to run this. It is also possible that {} does not exist. Please check if that file is present.".format(_GPIOCHIP_ROOT))
RuntimeError: The current user does not have permissions set to access the library functionalites. Please configure permissions or use the root user to run this. It is also possible that /dev/gpiochip0 does not exist. Please check if that file is present.
[ros2run]: Process exited with failure 1
admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ 

I’ve modified the docker run command in my launch script to include the following:

# Run container from image
print_info "Running $CONTAINER_NAME"
docker run -it --rm \
    --privileged \
    --network host \
    ${DOCKER_ARGS[@]} \
    -v $ISAAC_ROS_DEV_DIR:/workspaces/isaac_ros-dev \
    -v /dev/*:/dev/* \
    -v /etc/localtime:/etc/localtime:ro \
    -v /proc/device-tree/compatible:/proc/device-tree/compatible \
    -v /proc/device-tree/chosen:/proc/device-tree/chosen \
    --device /dev/gpiochip0 \
    --name "$CONTAINER_NAME" \
    --runtime nvidia \
    --user="admin" \
    --entrypoint /usr/local/bin/scripts/workspace-entrypoint.sh \
    --workdir /workspaces/isaac_ros-dev \
    $@ \
    $BASE_NAME \
    /bin/bash

As per the guide here: GitHub - NVIDIA/jetson-gpio: A Python library that enables the use of Jetson's GPIOs

But I am still unable to access the GPIO pins within the Docker environment.
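
For reference, the two checks that narrow this down from inside the container are whether the device node is present at all and which group owns it compared to the groups the container user is in (illustrative commands, nothing Isaac-specific):

ls -l /dev/gpiochip0
id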

Update:

Running the following command seems to have solved it:

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ sudo chmod +777 /dev/gpiochip0
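
Side note: chmod +777 opens the node up to every user. A narrower, group-based alternative (a sketch only, assuming a gpio group exists and the container user belongs to it) would be:

sudo chown root:gpio /dev/gpiochip0
sudo chmod 660 /dev/gpiochip0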

However, I’ve now run into the following issue:

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ python3 thruster_test.py 
/home/admin/.local/lib/python3.8/site-packages/Jetson/GPIO/gpio_event.py:182: RuntimeWarning: Event not found
  warnings.warn("Event not found", RuntimeWarning)
^CTraceback (most recent call last):
  File "thruster_test.py", line 14, in <module>
    pwm = GPIO.PWM(pwmPin, 50)  # 50 Hz frequency
  File "/home/admin/.local/lib/python3.8/site-packages/Jetson/GPIO/gpio.py", line 595, in __init__
    _export_pwm(self._ch_info)
  File "/home/admin/.local/lib/python3.8/site-packages/Jetson/GPIO/gpio.py", line 196, in _export_pwm
    time.sleep(0.01)
KeyboardInterrupt

The code works just fine outside the Docker environment.

Hi navier,

Are you using the devkit or a custom board for the AGX Orin?
Which JetPack version are you using?

Do you mean that you run Docker on the AGX Orin and hit the issue there?

If so, why not just run the code on the AGX Orin without Docker?

Hello KevinFFF,

Thank you for your swift response.

We are working with the Jetson AGX Orin Developer Kit and currently using JetPack Version 5.1.2-b104. Below are the system details and configurations:

navier_orin@ubuntu:~$ cat /etc/nv_boot_control.conf
TNSPEC 3701-500-0000-K.0-1-1-jetson-agx-orin-devkit-
COMPATIBLE_SPEC 3701-300-0000--1--jetson-agx-orin-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE nvme0n1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0

navier_orin@ubuntu:~$ sudo apt show nvidia-jetpack
Package: nvidia-jetpack
Version: 5.1.2-b104
Priority: standard
Section: metapackages
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 5.1.2-b104), nvidia-jetpack-dev (= 5.1.2-b104)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29.3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r35.4/main arm64 Packages
Description: NVIDIA Jetpack Meta Package

navier_orin@ubuntu:~$ cat /etc/nv_tegra_release
# R35 (release), REVISION: 4.1, GCID: 33958178, BOARD: t186ref, EABI: aarch64, DATE: Tue Aug  1 19:57:35 UTC 2023

We are attempting to integrate Isaac ROS to leverage the hardware capabilities for our autonomous boat project, particularly to send PWM commands to the thruster via the GPIO pins. This functionality is a critical component of our larger ROS2 ecosystem.

We have followed the guide here (NVIDIA Isaac ROS — isaac_ros_docs documentation) to install the Docker deployment of Isaac ROS, and everything works great. However, we’re encountering problems when attempting to access the GPIO pins within the Isaac ROS Docker container, where we receive the errors described in the original post.

While installing ROS2 directly on the host could be a workaround, we strongly prefer to encapsulate all our services within Docker containers to streamline deployment processes.

Let me know if you require any more information.

It seems to be an environment issue in the container.

Could you share detailed reproduction steps (how you set up Docker on the devkit and run the application that hits the “Event not found” issue) so that we can verify it locally?

Hello,

I’ve followed the guide here: Compute Setup — isaac_ros_docs documentation

Here is my run_dev.sh

#!/bin/bash
#
# Copyright (c) 2021-2022, NVIDIA CORPORATION.  All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source $ROOT/utils/print_color.sh

function usage() {
    print_info "Usage: run_dev.sh" {isaac_ros_dev directory path OPTIONAL}
    print_info "Copyright (c) 2021-2022, NVIDIA CORPORATION."
}

# Read and parse config file if exists
#
# CONFIG_IMAGE_KEY (string, can be empty)

if [[ -f "${ROOT}/.isaac_ros_common-config" ]]; then
    . "${ROOT}/.isaac_ros_common-config"
fi

ISAAC_ROS_DEV_DIR="$1"
if [[ -z "$ISAAC_ROS_DEV_DIR" ]]; then
    ISAAC_ROS_DEV_DIR_DEFAULTS=("$HOME/workspaces/isaac_ros-dev" "/workspaces/isaac_ros-dev")
    for ISAAC_ROS_DEV_DIR in "${ISAAC_ROS_DEV_DIR_DEFAULTS[@]}"
    do
        if [[ -d "$ISAAC_ROS_DEV_DIR" ]]; then
            break
        fi
    done

    if [[ ! -d "$ISAAC_ROS_DEV_DIR" ]]; then
        ISAAC_ROS_DEV_DIR=$(realpath "$ROOT/../../")
    fi
    print_warning "isaac_ros_dev not specified, assuming $ISAAC_ROS_DEV_DIR"
else
    if [[ ! -d "$ISAAC_ROS_DEV_DIR" ]]; then
        print_error "Specified isaac_ros_dev does not exist: $ISAAC_ROS_DEV_DIR"
        exit 1
    fi
    shift 1
fi

ON_EXIT=()
function cleanup {
    for command in "${ON_EXIT[@]}"
    do
        $command
    done
}
trap cleanup EXIT

pushd . >/dev/null
cd $ROOT
ON_EXIT+=("popd")

# Prevent running as root.
if [[ $(id -u) -eq 0 ]]; then
    print_error "This script cannot be executed with root privileges."
    print_error "Please re-run without sudo and follow instructions to configure docker for non-root user if needed."
    exit 1
fi

# Check if user can run docker without root.
RE="\<docker\>"
if [[ ! $(groups $USER) =~ $RE ]]; then
    print_error "User |$USER| is not a member of the 'docker' group and cannot run docker commands without sudo."
    print_error "Run 'sudo usermod -aG docker \$USER && newgrp docker' to add user to 'docker' group, then re-run this script."
    print_error "See: https://docs.docker.com/engine/install/linux-postinstall/"
    exit 1
fi

# Check if able to run docker commands.
if [[ -z "$(docker ps)" ]] ;  then
    print_error "Unable to run docker commands. If you have recently added |$USER| to 'docker' group, you may need to log out and log back in for it to take effect."
    print_error "Otherwise, please check your Docker installation."
    exit 1
fi

# Check if git-lfs is installed.
git lfs &>/dev/null
if [[ $? -ne 0 ]] ; then
    print_error "git-lfs is not insalled. Please make sure git-lfs is installed before you clone the repo."
    exit 1
fi

# Check if all LFS files are in place in the repository where this script is running from.
cd $ROOT
git rev-parse &>/dev/null
if [[ $? -eq 0 ]]; then
    LFS_FILES_STATUS=$(cd $ISAAC_ROS_DEV_DIR && git lfs ls-files | cut -d ' ' -f2)
    for (( i=0; i<${#LFS_FILES_STATUS}; i++ )); do
        f="${LFS_FILES_STATUS:$i:1}"
        if [[ "$f" == "-" ]]; then
            print_error "LFS files are missing. Please re-clone the repo after installing git-lfs."
            exit 1
        fi
    done
fi

PLATFORM="$(uname -m)"

BASE_NAME="isaac_ros_dev-$PLATFORM"
CONTAINER_NAME="$BASE_NAME-container"

# Remove any exited containers.
if [ "$(docker ps -a --quiet --filter status=exited --filter name=$CONTAINER_NAME)" ]; then
    docker rm $CONTAINER_NAME > /dev/null
fi

# Re-use existing container.
if [ "$(docker ps -a --quiet --filter status=running --filter name=$CONTAINER_NAME)" ]; then
    print_info "Attaching to running container: $CONTAINER_NAME"
    docker exec -i -t -u admin --workdir /workspaces/isaac_ros-dev $CONTAINER_NAME /bin/bash $@
    exit 0
fi

# Build image
IMAGE_KEY=ros2_humble
if [[ ! -z "${CONFIG_IMAGE_KEY}" ]]; then
    IMAGE_KEY=$CONFIG_IMAGE_KEY
fi

BASE_IMAGE_KEY=$PLATFORM.user
if [[ ! -z "${IMAGE_KEY}" ]]; then
    BASE_IMAGE_KEY=$PLATFORM.$IMAGE_KEY

    # If the configured key does not have .user, append it last
    if [[ $IMAGE_KEY != *".user"* ]]; then
        BASE_IMAGE_KEY=$BASE_IMAGE_KEY.user
    fi
fi

print_info "Building $BASE_IMAGE_KEY base as image: $BASE_NAME using key $BASE_IMAGE_KEY"
$ROOT/build_base_image.sh $BASE_IMAGE_KEY $BASE_NAME '' '' ''

if [ $? -ne 0 ]; then
    print_error "Failed to build base image: $BASE_NAME, aborting."
    exit 1
fi

# Map host's display socket to docker
DOCKER_ARGS+=("-v /tmp/.X11-unix:/tmp/.X11-unix")
DOCKER_ARGS+=("-v $HOME/.Xauthority:/home/admin/.Xauthority:rw")
DOCKER_ARGS+=("-e DISPLAY")
DOCKER_ARGS+=("-e NVIDIA_VISIBLE_DEVICES=all")
DOCKER_ARGS+=("-e NVIDIA_DRIVER_CAPABILITIES=all")
DOCKER_ARGS+=("-e FASTRTPS_DEFAULT_PROFILES_FILE=/usr/local/share/middleware_profiles/rtps_udp_profile.xml")
DOCKER_ARGS+=("-e ROS_DOMAIN_ID")
DOCKER_ARGS+=("-e USER")

if [[ $PLATFORM == "aarch64" ]]; then
    DOCKER_ARGS+=("-v /usr/bin/tegrastats:/usr/bin/tegrastats")
    DOCKER_ARGS+=("-v /tmp/argus_socket:/tmp/argus_socket")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcusolver.so.11:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libcusolver.so.11")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcusparse.so.11:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libcusparse.so.11")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcurand.so.10:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libcurand.so.10")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcufft.so.10:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libcufft.so.10")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnvToolsExt.so:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libnvToolsExt.so")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcupti.so.11.4:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libcupti.so.11.4")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcudla.so.1:/usr/local/cuda-11.4/targets/aarch64-linux/lib/libcudla.so.1")
    DOCKER_ARGS+=("-v /usr/local/cuda-11.4/targets/aarch64-linux/include/nvToolsExt.h:/usr/local/cuda-11.4/targets/aarch64-linux/include/nvToolsExt.h")
    DOCKER_ARGS+=("-v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra")
    DOCKER_ARGS+=("-v /usr/src/jetson_multimedia_api:/usr/src/jetson_multimedia_api")
    DOCKER_ARGS+=("-v /opt/nvidia/nsight-systems-cli:/opt/nvidia/nsight-systems-cli")
    DOCKER_ARGS+=("--pid=host")
    DOCKER_ARGS+=("-v /opt/nvidia/vpi2:/opt/nvidia/vpi2")
    DOCKER_ARGS+=("-v /usr/share/vpi2:/usr/share/vpi2")

    # If jtop present, give the container access
    if [[ $(getent group jtop) ]]; then
        DOCKER_ARGS+=("-v /run/jtop.sock:/run/jtop.sock:ro")
        JETSON_STATS_GID="$(getent group jtop | cut -d: -f3)"
        DOCKER_ARGS+=("--group-add $JETSON_STATS_GID")
    fi
fi

# Optionally load custom docker arguments from file
DOCKER_ARGS_FILE="$ROOT/.isaac_ros_dev-dockerargs"
if [[ -f "$DOCKER_ARGS_FILE" ]]; then
    print_info "Using additional Docker run arguments from $DOCKER_ARGS_FILE"
    readarray -t DOCKER_ARGS_FILE_LINES < $DOCKER_ARGS_FILE
    for arg in "${DOCKER_ARGS_FILE_LINES[@]}"; do
        DOCKER_ARGS+=($(eval "echo $arg | envsubst"))
    done
fi

# Run container from image
print_info "Running $CONTAINER_NAME"
docker run -it --rm \
    --privileged \
    --network host \
    ${DOCKER_ARGS[@]} \
    -v $ISAAC_ROS_DEV_DIR:/workspaces/isaac_ros-dev \
    -v /dev/*:/dev/* \
    -v /etc/udev/rules.d:/etc/udev/rules.d \
    -v /etc/localtime:/etc/localtime:ro \
    -v /proc/device-tree/compatible:/proc/device-tree/compatible \
    -v /proc/device-tree/chosen:/proc/device-tree/chosen \
    --device /dev/gpiochip0 \
    --name "$CONTAINER_NAME" \
    --runtime nvidia \
    --user="admin" \
    --entrypoint /usr/local/bin/scripts/workspace-entrypoint.sh \
    --workdir /workspaces/isaac_ros-dev \
    $@ \
    $BASE_NAME \
    /bin/bash
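
For completeness, one variation we have not tried yet would be to forward the host’s gpio group GID into the container the same way the script already handles the jtop group, for example:

    # Hypothetical addition alongside the jtop block above
    if [[ $(getent group gpio) ]]; then
        HOST_GPIO_GID="$(getent group gpio | cut -d: -f3)"
        DOCKER_ARGS+=("--group-add $HOST_GPIO_GID")
    fi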

Also, here is the script I am trying to execute, where I get the GPIO permission errors:

#!/usr/bin/env python3

import serial
import Jetson.GPIO as GPIO
import time

# Initialize serial port
ser = serial.Serial('/dev/ttyTHS0', 9600)

# Setup Jetson Nano GPIO
pwmPin = 15  # Replace with the actual PWM-capable pin
GPIO.setmode(GPIO.BOARD)
GPIO.setup(pwmPin, GPIO.OUT)
pwm = GPIO.PWM(pwmPin, 50)  # 50 Hz frequency

# Initialization sequence
print("Initializing thruster...")
pwm.start(7.5)  # Start with 7.5% duty cycle for 1500 µs to initialize thruster
time.sleep(2)  # Allow 2 seconds for initialization

# Wait for user input to proceed
input("Press Enter to proceed with thrust control...")

try:    
    # Main Loop
    while True:
        line = ser.readline().decode('utf-8').strip()  # Read line from Arduino
        print(line)  # Debugging output

        # Parse thruster direction
        if "Thruster: Forward" in line:
            pwm.ChangeDutyCycle(9.5)  # Corresponding to ~1900 µs
        elif "Thruster: Backward" in line:
            pwm.ChangeDutyCycle(5.5)  # Corresponding to ~1100 µs
        elif "Thruster: Neutral" in line:
            print("Neutral")
            pwm.ChangeDutyCycle(7.5)  # Corresponding to 1500 µs

        # Parse Auto command and implement autonomy logic
        if "Auto: On" in line:
            print("Autonomous mode activated")
            # Insert your autonomy logic here
            pwm.ChangeDutyCycle(50)  # For demonstration, 50% duty cycle; you can adjust this

finally:
    pwm.stop()
    GPIO.cleanup()  # Reset the GPIO settings
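
(For context on the duty-cycle numbers in the comments: at 50 Hz the PWM period is 20 ms, so duty cycle in % = pulse width in µs / 20 000 µs × 100, which gives 7.5 % for 1500 µs, 5.5 % for 1100 µs and 9.5 % for 1900 µs.)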

Have you referred to the following instructions to set up the permissions/group for the Jetson.GPIO library inside Docker?
GitHub - NVIDIA/jetson-gpio: A Python library that enables the use of Jetson's GPIOs
(instead of using sudo chmod +777 /dev/gpiochip0 to work around the permission issue)
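
If I recall that README correctly, the relevant steps are roughly the following, run on the host rather than inside the container (udev rules are applied by the host’s udev daemon), with the user name and the path to the rules file adjusted to your setup:

sudo groupadd -f -r gpio
sudo usermod -a -G gpio <your_user>
sudo cp lib/python/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && sudo udevadm trigger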

Hello,

We have tried to follow that guide and updated Dockerfile.user accordingly:

# Copyright (c) 2022, NVIDIA CORPORATION.  All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

ARG BASE_IMAGE
FROM ${BASE_IMAGE}

# Setup non-root admin user
ARG USERNAME=admin
ARG USER_UID=1000
ARG USER_GID=1000
ARG GPIO_GID=999

# Install prerequisites
RUN apt-get update && apt-get install -y \
        sudo \
        udev \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean

# Create GPIO group
RUN groupadd --gid ${GPIO_GID} gpio

# Reuse triton-server user as 'admin' user if exists
RUN if [ $(getent group triton-server) ]; then \
        groupmod -o --gid ${USER_GID} -n ${USERNAME} triton-server ; \
        usermod -l ${USERNAME} -u ${USER_UID} -m -d /home/${USERNAME} triton-server ; \
        mkdir -p /home/${USERNAME} ; \
        sudo chown ${USERNAME}:${USERNAME} /home/${USERNAME} ; \
    fi

# Create the 'admin' user if not already exists
RUN if [ ! $(getent passwd ${USERNAME}) ]; then \
        groupadd --gid ${USER_GID} ${USERNAME} ; \
        useradd --uid ${USER_UID} --gid ${USER_GID} -m ${USERNAME} ; \
    fi

# Update 'admin' user
RUN usermod -aG gpio,video,plugdev,sudo ${USERNAME} \
    && echo ${USERNAME} ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/${USERNAME} \
    && chmod 0440 /etc/sudoers.d/${USERNAME}


# Copy scripts
RUN mkdir -p /usr/local/bin/scripts
COPY scripts/*entrypoint.sh /usr/local/bin/scripts/
RUN  chmod +x /usr/local/bin/scripts/*.sh

# Copy middleware profiles
RUN mkdir -p /usr/local/share/middleware_profiles
COPY middleware_profiles/*profile.xml /usr/local/share/middleware_profiles/

# Install stuff
RUN python3 -m pip install -U \
        pyserial \
        Jetson.GPIO

ENV USERNAME=${USERNAME}
ENV USER_GID=${USER_GID}
ENV USER_UID=${USER_UID}

We’ve added the following:

ARG GPIO_GID=999

# Create GPIO group
RUN groupadd --gid ${GPIO_GID} gpio

# Update 'admin' user
RUN usermod -aG gpio,video,plugdev,sudo ${USERNAME} \
    && echo ${USERNAME} ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/${USERNAME} \
    && chmod 0440 /etc/sudoers.d/${USERNAME}

Running the id command yields the following:

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ id
uid=1000(admin) gid=1000(admin) groups=1000(admin),27(sudo),44(video),46(plugdev),999(gpio)
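
One thing we are still unsure about: GPIO_GID is hard-coded to 999 in the Dockerfile, and since /dev/gpiochip0 is the host’s device node, the numeric GID of the container’s gpio group presumably has to match whatever group owns the node on the host. Illustrative checks (run on the host and again in the container):

ls -ln /dev/gpiochip0
getent group gpio

If the host value differs from 999, GPIO_GID could instead be passed as a matching build argument.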

We’ve also tried running setup.py install from the Jetson.GPIO library within the container, but we still get permission errors.

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/jetson-gpio$ python3 setup.py install
running install
error: can't create or remove files in install directory

The following error occurred while trying to add or remove files in the
installation directory:

    [Errno 13] Permission denied: '/usr/lib/python3.8/site-packages'

The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:

    /usr/lib/python3.8/site-packages/

This directory does not currently exist.  Please create it and try again, or
choose a different installation directory (using the -d or --install-dir
option).

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/jetson-gpio$ sudo python3 setup.py install
running install
/usr/local/lib/python3.8/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/usr/local/lib/python3.8/dist-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 2.20.2ubuntu2 is an invalid version and will not be supported in a future release
  warnings.warn(
running bdist_egg
running egg_info
writing lib/python/Jetson.GPIO.egg-info/PKG-INFO
writing dependency_links to lib/python/Jetson.GPIO.egg-info/dependency_links.txt
writing top-level names to lib/python/Jetson.GPIO.egg-info/top_level.txt
reading manifest file 'lib/python/Jetson.GPIO.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'LICENCE.txt'
adding license file 'LICENSE.txt'
writing manifest file 'lib/python/Jetson.GPIO.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
creating build/bdist.linux-aarch64/egg
creating build/bdist.linux-aarch64/egg/Jetson
copying build/lib/Jetson/__init__.py -> build/bdist.linux-aarch64/egg/Jetson
creating build/bdist.linux-aarch64/egg/Jetson/GPIO
copying build/lib/Jetson/GPIO/gpio_cdev.py -> build/bdist.linux-aarch64/egg/Jetson/GPIO
copying build/lib/Jetson/GPIO/__init__.py -> build/bdist.linux-aarch64/egg/Jetson/GPIO
copying build/lib/Jetson/GPIO/99-gpio.rules -> build/bdist.linux-aarch64/egg/Jetson/GPIO
copying build/lib/Jetson/GPIO/gpio_event.py -> build/bdist.linux-aarch64/egg/Jetson/GPIO
copying build/lib/Jetson/GPIO/gpio.py -> build/bdist.linux-aarch64/egg/Jetson/GPIO
copying build/lib/Jetson/GPIO/gpio_pin_data.py -> build/bdist.linux-aarch64/egg/Jetson/GPIO
creating build/bdist.linux-aarch64/egg/RPi
copying build/lib/RPi/__init__.py -> build/bdist.linux-aarch64/egg/RPi
creating build/bdist.linux-aarch64/egg/RPi/GPIO
copying build/lib/RPi/GPIO/__init__.py -> build/bdist.linux-aarch64/egg/RPi/GPIO
byte-compiling build/bdist.linux-aarch64/egg/Jetson/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/Jetson/GPIO/gpio_cdev.py to gpio_cdev.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/Jetson/GPIO/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/Jetson/GPIO/gpio_event.py to gpio_event.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/Jetson/GPIO/gpio.py to gpio.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/Jetson/GPIO/gpio_pin_data.py to gpio_pin_data.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/RPi/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.linux-aarch64/egg/RPi/GPIO/__init__.py to __init__.cpython-38.pyc
creating build/bdist.linux-aarch64/egg/EGG-INFO
copying lib/python/Jetson.GPIO.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
copying lib/python/Jetson.GPIO.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying lib/python/Jetson.GPIO.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying lib/python/Jetson.GPIO.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating 'dist/Jetson.GPIO-2.1.4-py3.8.egg' and adding 'build/bdist.linux-aarch64/egg' to it
removing 'build/bdist.linux-aarch64/egg' (and everything under it)
Processing Jetson.GPIO-2.1.4-py3.8.egg
Copying Jetson.GPIO-2.1.4-py3.8.egg to /usr/lib/python3.8/site-packages
Adding Jetson.GPIO 2.1.4 to easy-install.pth file

Installed /usr/lib/python3.8/site-packages/Jetson.GPIO-2.1.4-py3.8.egg
Processing dependencies for Jetson.GPIO==2.1.4
Finished processing dependencies for Jetson.GPIO==2.1.4
admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/jetson-gpio$ cd ..
admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff$ cd src/
admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ ls
allocate_thrust.py  frskytest.py  thruster_test.py  trhuster.py
admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ python3 thruster_test.py 
/usr/local/lib/python3.8/dist-packages/Jetson/GPIO/gpio_event.py:182: RuntimeWarning: Event not found
  warnings.warn("Event not found", RuntimeWarning)
Initializing thruster...
^CTraceback (most recent call last):
  File "thruster_test.py", line 19, in <module>
    time.sleep(2)  # Allow 2 seconds for initialization
KeyboardInterrupt

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ sudo python3 thruster_test.py 
/usr/local/lib/python3.8/dist-packages/Jetson/GPIO/gpio_event.py:182: RuntimeWarning: Event not found
  warnings.warn("Event not found", RuntimeWarning)
Initializing thruster...
Press Enter to proceed with thrust control...^CTraceback (most recent call last):
  File "thruster_test.py", line 22, in <module>
    input("Press Enter to proceed with thrust control...")
KeyboardInterrupt

admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ sudo udevadm control --reload-rules && sudo udevadm trigger
Running in chroot, ignoring request.
Running in chroot, ignoring request.
admin@ubuntu:/workspaces/isaac_ros-dev/src/custom_stuff/src$ python3 thruster_test.py 
/usr/local/lib/python3.8/dist-packages/Jetson/GPIO/gpio_event.py:182: RuntimeWarning: Event not found
  warnings.warn("Event not found", RuntimeWarning)
Initializing thruster...
Press Enter to proceed with thrust control...^CTraceback (most recent call last):
  File "thruster_test.py", line 22, in <module>
    input("Press Enter to proceed with thrust control...")
KeyboardInterrupt


It seems to be just a warning message.
Have you tried keeping it running without issuing a KeyboardInterrupt?

Thank you for the help.

The issue has been resolved, though the exact solution remains unclear. We plan to redeploy our Docker solution later to determine which troubleshooting step was effective.