Hello NVIDIA Team and Community,
I’m encountering persistent issues with missing SDK components after what appears to be a successful JetPack 6.2 (L4T R36.4.3) component installation on my Jetson AGX Orin 64GB Developer Kit. I’m using SDK Manager (v2.3.0.12617) running via the official Docker image (`sdkmanager:2.3.0.12617-Ubuntu_22.04`) on an Ubuntu 24.04 host machine.
System Details:
- Jetson Device: NVIDIA Jetson AGX Orin 64GB Developer Kit
- JetPack Version (as reported by `jtop`): 6.2
- L4T Version (as reported by `jtop`): 36.4.3
- Host PC OS for SDK Manager: Ubuntu 24.04 LTS
- SDK Manager Version: 2.3.0.12617 (running via official Docker image `sdkmanager:2.3.0.12617-Ubuntu_22.04`)
- Jetson User for SDKM Target: `cyberhope`
Issue Summary:
Despite SDK Manager CLI (via Docker, using its interactive query to select components and then providing target credentials) reporting “INSTALLATION COMPLETED SUCCESSFULLY” for target components (with most items showing “Up-to-date” and a few “Installed”), crucial libraries appear to be missing or misconfigured on the Jetson:
- `libtritonserver.so` is missing: this prevents the `libnvdsgst_inferserver.so` GStreamer plugin from loading.
- OpenCV lacks CUDA support: `jtop` reports “OpenCV: 4.8.0 with CUDA: NO”.
- As a consequence, I’m also unable to get DeepStream sample applications (like `deepstream-test1-app`) to successfully serialize and save their TensorRT engine files, even when targeting a user-writable directory like `/home/cyberhope/`.
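For what it’s worth, the missing-library symptoms can be confirmed on the device without root via the dynamic-linker cache (a quick diagnostic sketch; the library names are the ones from the GStreamer warnings later in this post):

```shell
# Ask the dynamic linker whether it can resolve the libraries the
# DeepStream plugins complain about (run on the Jetson):
for lib in libtritonserver.so librivermax.so.0; do
    if ldconfig -p | grep -qF "$lib"; then
        echo "$lib: found in linker cache"
    else
        echo "$lib: NOT found in linker cache"
    fi
done
```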
Relevant `jtop` Output from Jetson:
Platform
Machine: aarch64
System: Linux
Distribution: Ubuntu 22.04 Jammy Jellyfish
Release: 5.15.148-tegra
Python: 3.10.12
Libraries
CUDA: 12.6.68
cuDNN: 9.3.0.75
TensorRT: 10.3.0.30
VPI: 3.2.4
Vulkan: 1.3.204
OpenCV: 4.8.0 with CUDA: NO <--- Problem
Hardware
Model: NVIDIA Jetson AGX Orin Developer Kit
P-Number: p3701-0005
Module: NVIDIA Jetson AGX Orin (64GB ram)
SoC: tegra234
CUDA Arch BIN: 8.7
L4T: 36.4.3
Jetpack: 6.2
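The `jtop` “CUDA: NO” report can be cross-checked against OpenCV’s own build information (a small check, assuming the stock Python OpenCV bindings are installed; as far as I understand, the OpenCV shipped with JetPack is built without CUDA by default, so a CUDA-enabled build has to be compiled or obtained separately):

```shell
# Print the installed OpenCV version and its "NVIDIA CUDA" build flag:
python3 - <<'EOF'
import sys
try:
    import cv2
except ImportError:
    print("cv2 not importable"); sys.exit(0)
print(cv2.__version__)
for line in cv2.getBuildInformation().splitlines():
    if "NVIDIA CUDA" in line:
        print(line.strip())  # "NVIDIA CUDA: NO" for the stock JetPack build
EOF
```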
Verification Steps & Outputs from Jetson (after SDKM component install & reboot):
- `cat /etc/nv_tegra_release`:
  # R36 (release), REVISION: 4.3, GCID: 38968081, BOARD: generic, EABI: aarch64, DATE: Wed Jan 8 01:49:37 UTC 2025
  # KERNEL_VARIANT: oot TARGET_USERSPACE_LIB_DIR=nvidia TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia
  (Note: the “BOARD: generic” and future DATE persist despite an OS re-flash and subsequent component installs, but `jtop` correctly identifies the L4T/JetPack version. The system clock is confirmed correct now.)
- `nvcc --version`:
  nvcc: NVIDIA (R) Cuda compiler driver
  Copyright (c) 2005-2024 NVIDIA Corporation
  Built on Wed_Aug_14_10:14:07_PDT_2024
  Cuda compilation tools, release 12.6, V12.6.68
  Build cuda_12.6.r12.6/compiler.34714021_0
  (This matches JetPack 6.2 expectations.)
- `ls -l /usr/local/cuda`:
  lrwxrwxrwx 1 root root 20 Jun 18 10:43 /usr/local/cuda -> /usr/local/cuda-12.6
  (Symlink appears correct.)
- `sudo find /usr /opt -name "libtritonserver.so"`:
  (No output; the file was not found.)
- `rm -rf ~/.cache/gstreamer-1.0 ; gst-inspect-1.0 nvinfer`:
  (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/opt/nvidia/deepstream/deepstream-7.1/lib/gst-plugins/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
  (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/opt/nvidia/deepstream/deepstream-7.1/lib/gst-plugins/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
  (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
  (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
  ... (nvdsgst_infer plugin details load successfully) ...
SDK Manager CLI Interaction Summary:
I used the SDK Manager Docker image’s `--query interactive` mode to select components. The OS flash was deselected, and all Jetson Runtime and Jetson SDK components (including DeepStream 7.1, CUDA-X AI, etc.) were selected. After these selections, the SDKM TUI prompted for the target IP, username (`cyberhope`), and password, which were provided. The final summary screen from SDKM was:
===== INSTALLATION COMPLETED SUCCESSFULLY. =====
- DateTime Target Setup: Installed
- Jetson Platform Services: Installed
(Other 19 components like CUDA Runtime, cuDNN Runtime, TensorRT Runtime, OpenCV Runtime, DeepStream, etc., reported as "Up-to-date")
===== Installation completed successfully - Total 21 components =====
===== 2 succeeded, 0 failed, 19 up-to-date, 0 skipped =====
Key Questions:
- Why is `libtritonserver.so` not being installed by SDK Manager as part of the JetPack 6.2 “CUDA-X AI” or “DeepStream SDK” components for AGX Orin?
- Why is the OpenCV (version 4.8.0 per `jtop`) installed by SDK Manager not built with CUDA support, resulting in `jtop` showing “CUDA: NO”?
- Could the “BOARD: generic” and unusual future date in `/etc/nv_tegra_release` (despite `jtop` showing correct L4T/JetPack versions) indicate an issue with the base OS image that might be preventing proper component installation?
- What is the recommended procedure or specific CLI arguments to ensure a complete installation of all JetPack 6.2 target software components, including Triton client libraries and a CUDA-enabled OpenCV, using SDK Manager CLI in Docker?
- Could the host OS being Ubuntu 24.04 (though SDKM is Dockerized with an Ubuntu 22.04 base and `qemu-user-static`/`binfmt-support` are installed on the host) be a factor?
Troubleshooting Already Attempted:
- Ensured host prerequisites for Docker SDKM (`qemu-user-static`, `binfmt-support`) are met.
- Cleared the SDKM cache on the host (by removing and letting Docker SDKM recreate the `~/.nvsdkm` mapped volume).
- Used SDKM’s interactive query (`--query interactive`) to make selections, then let the TUI proceed to target connection details.
- Confirmed the correct target credentials (`cyberhope` user) are used by SDKM during the TUI phase.
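One avenue I have not yet exhausted: JetPack 6 target components are also published as Debian packages, so the device’s own apt state can be inspected (and, in principle, the components (re)installed on the Jetson directly, bypassing SDK Manager). A sketch, assuming the NVIDIA L4T apt sources on the device are intact:

```shell
# Read-only checks first: what does apt/dpkg currently know about JetPack?
apt-cache policy nvidia-jetpack 2>/dev/null || echo "apt-cache unavailable"
dpkg -l 2>/dev/null | grep -iE 'deepstream|opencv|cuda' || echo "no matching packages found"
# If a candidate version is listed, the full target component set can be
# (re)installed directly on the device:
#   sudo apt update && sudo apt install nvidia-jetpack
```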
My primary goal is to get a stable Jetson AGX Orin environment running JetPack 6.2 with DeepStream 7.1, which includes successful TensorRT engine generation/persistence and full functionality of the AI stack. The inability to save engine files was the initial symptom that led to discovering these missing/misconfigured components.
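To isolate the engine-persistence symptom from DeepStream itself, `trtexec` can build and serialize an engine directly (a sketch; the `trtexec` path is the JetPack default, and `/path/to/model.onnx` is a placeholder for any ONNX model):

```shell
# Build a TensorRT engine from an ONNX model and save it to a user-writable
# location; a failure here points at TensorRT or permissions rather than
# the DeepStream sample app.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
if [ -x "$TRTEXEC" ]; then
    "$TRTEXEC" --onnx=/path/to/model.onnx \
               --saveEngine=/home/cyberhope/test.engine
    ls -l /home/cyberhope/test.engine
else
    echo "trtexec not found at $TRTEXEC"
fi
```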
SDK Manager logs from the host (from the Docker volume mapped to `/home/nvidia/.nvsdkm/logs` inside the container) can be provided if specific log files would be useful.
Any guidance would be greatly appreciated.
Thank you!
Rick Barretto