Issue with setting up Triton on Jetson Nano

I am trying to set up Isaac ROS DNN Inference using the GitHub repository NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference (hardware-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT, for both Jetson and x86_64 with a CUDA-capable GPU).

While trying to set up the Docker container for the Triton server, we get an error when running the command:

cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common &&
./scripts/run_dev.sh

Instead, we get the following terminal output:
isaac_ros_dev not specified, assuming /home/nvidia/workspaces/isaac_ros-dev
Error: Failed to call git rev-parse --git-dir --show-toplevel: "fatal: not a git repository (or any of the parent directories): .git\n"
Building aarch64.humble.nav2.user base as image: isaac_ros_dev-aarch64 using key aarch64.humble.nav2.user
Using base image name not specified, using ''
Using docker context dir not specified, using Dockerfile directory
Resolved the following Dockerfiles for target image: aarch64.humble.nav2.user
/home/nvidia/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.user
/home/nvidia/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.aarch64.humble.nav2
Building /home/nvidia/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.aarch64.humble.nav2 as image: aarch64-humble-nav2-image with base:
Sending build context to Docker daemon 80.9kB
Step 1/1 : FROM nvcr.io/nvidia/isaac/ros:aarch64-humble-nav2_6dfaf7adbe190f1181c3a0a2f2418760
---> d8c1024fc418
[Warning] One or more build-args [USERNAME USER_GID USER_UID] were not consumed
Successfully built d8c1024fc418
Successfully tagged aarch64-humble-nav2-image:latest
Building /home/nvidia/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.user as image: isaac_ros_dev-aarch64 with base: aarch64-humble-nav2-image
Sending build context to Docker daemon 80.9kB
Step 1/17 : ARG BASE_IMAGE
Step 2/17 : FROM ${BASE_IMAGE}
---> d8c1024fc418
Step 3/17 : ARG USERNAME=admin
---> Using cache
---> af02596d177e
Step 4/17 : ARG USER_UID=1000
---> Using cache
---> c56c6ad95c14
Step 5/17 : ARG USER_GID=1000
---> Using cache
---> d8d544755b14
Step 6/17 : RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/* && apt-get clean
---> Running in 1b9c47450a32
failed to create shim task: OCI runtime create failed: failed to create NVIDIA Container Runtime: failed to construct OCI spec modifier: failed to construct discoverer: failed to create Xorg discoverer: failed to locate libcuda.so: pattern libcuda.so.*.* not found: unknown
Failed to build base image: isaac_ros_dev-aarch64, aborting.
~/workspaces/isaac_ros-dev/src/isaac_ros_common
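
As a side note, the git rev-parse error at the top of the log means the script could not find a git repository at the workspace path, which typically happens when isaac_ros_common was downloaded as an archive rather than cloned. A quick check (assuming the standard workspace layout from the docs):

cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common
git rev-parse --show-toplevel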

The device is a Jetson Nano with JetPack 5.1.1 installed. We've followed all the instructions up to this point, including updating our /etc/docker/daemon.json. We've also been unsuccessful in locating libcuda.so, and we're not sure if there is an easy solution to this issue that we're missing.
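
For anyone checking the same thing, a search along these lines should show whether the driver library is present on the host (the Tegra path is the usual JetPack 5 driver location, nothing Isaac ROS specific):

# Search the linker cache and the usual Tegra driver directory
ldconfig -p | grep libcuda
ls -l /usr/lib/aarch64-linux-gnu/tegra/libcuda*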

Based purely on the timing of the post and the error message (failed to locate libcuda…), I suspect I was hitting the same issue today. My system was working last week, and rebooting didn't help. It turned out there had been some recent upgrades to the NVIDIA packages. I got it running again by simply downgrading:
sudo apt install nvidia-docker2=2.12.0-1 nvidia-container-toolkit=1.12.1-1 nvidia-container-runtime=3.12.0-1 libnvidia-container-tools=1.12.1-1 nvidia-container-toolkit-base=1.12.1-1
I haven't analyzed the problem any further yet.
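
If the downgrade works for you as well, it may be worth pinning the packages so a later apt upgrade doesn't pull the newer versions back in (same package names as in the command above):

# Hold the downgraded NVIDIA container packages at their current versions
sudo apt-mark hold nvidia-docker2 nvidia-container-toolkit nvidia-container-runtime libnvidia-container-tools nvidia-container-toolkit-base

They can be released again with sudo apt-mark unhold once a fixed release is available.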

Hi,

Just want to confirm first.

Do you use an Orin Nano (Ampere GPU) or a Jetson Nano (Maxwell GPU)?
JetPack 5 doesn't support the Maxwell-based Nano.
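
One way to confirm which module and release you are on (standard Jetson files, nothing specific to this issue):

# Print the board model and the L4T release
cat /proc/device-tree/model
cat /etc/nv_tegra_release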

Thanks.

I don't want to interfere here too much, but I'll just note that I was hitting possibly the same error on a Xavier with what I think is JetPack 5.0.2 (nvidia-l4t-core 35.1.0-20220825113828).

The downgrade steps worked for me. Thanks!
