JetPack 6.2 (L4T R36.4.3) on AGX Orin - Missing libtritonserver.so & OpenCV CUDA Support After SDK Manager Component Install

Hello NVIDIA Team and Community,

I’m encountering persistent issues with missing SDK components after what appears to be a successful JetPack 6.2 (L4T R36.4.3) component installation on my Jetson AGX Orin 64GB Developer Kit. I’m using SDK Manager (v2.3.0.12617) running via the official Docker image (sdkmanager:2.3.0.12617-Ubuntu_22.04) on an Ubuntu 24.04 host machine.

System Details:

  • Jetson Device: NVIDIA Jetson AGX Orin 64GB Developer Kit
  • JetPack Version (as reported by jtop): 6.2
  • L4T Version (as reported by jtop): 36.4.3
  • Host PC OS for SDK Manager: Ubuntu 24.04 LTS
  • SDK Manager Version: 2.3.0.12617 (running via official Docker image sdkmanager:2.3.0.12617-Ubuntu_22.04)
  • Jetson User for SDKM Target: cyberhope

Issue Summary:

Despite SDK Manager CLI (via Docker, using its interactive query to select components and then providing target credentials) reporting “INSTALLATION COMPLETED SUCCESSFULLY” for target components (with most items showing “Up-to-date” and a few “Installed”), crucial libraries appear to be missing or misconfigured on the Jetson:

  1. libtritonserver.so is missing: This prevents the libnvdsgst_inferserver.so GStreamer plugin from loading.
  2. OpenCV lacks CUDA support: jtop reports “OpenCV: 4.8.0 with CUDA: NO”.
  3. As a consequence, I’m also unable to get DeepStream sample applications (like deepstream-test1-app) to successfully serialize and save their TensorRT engine files, even when targeting a user-writable directory like /home/cyberhope/.
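
For reference, a minimal sketch of the workaround attempted for item 3: copy the sample's nvinfer config somewhere writable and point its engine path at the home directory (the config path and engine filename below are illustrative assumptions, not the exact files used):

    # Copy the test1 nvinfer config to a user-writable location (illustrative path)
    cp /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt ~/
    # Point engine serialization at the home directory instead of /opt (illustrative engine name)
    sed -i 's|^model-engine-file=.*|model-engine-file=/home/cyberhope/test1_pgie.engine|' ~/dstest1_pgie_config.txt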

Relevant jtop Output from Jetson:

Platform
Machine: aarch64
System: Linux
Distribution: Ubuntu 22.04 Jammy Jellyfish
Release: 5.15.148-tegra
Python: 3.10.12

Libraries
CUDA: 12.6.68
cuDNN: 9.3.0.75
TensorRT: 10.3.0.30
VPI: 3.2.4
Vulkan: 1.3.204
OpenCV: 4.8.0 with CUDA: NO  <--- Problem

Hardware
Model: NVIDIA Jetson AGX Orin Developer Kit
P-Number: p3701-0005
Module: NVIDIA Jetson AGX Orin (64GB ram)
SoC: tegra234
CUDA Arch BIN: 8.7
L4T: 36.4.3
Jetpack: 6.2

Verification Steps & Outputs from Jetson (after SDKM component install & reboot):

  1. cat /etc/nv_tegra_release:

    # R36 (release), REVISION: 4.3, GCID: 38968081, BOARD: generic, EABI: aarch64, DATE: Wed Jan  8 01:49:37 UTC 2025 
    # KERNEL_VARIANT: oot
    TARGET_USERSPACE_LIB_DIR=nvidia
    TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia
    

    (Note: The “BOARD: generic” and future DATE persist despite OS re-flash and subsequent component installs, but jtop correctly identifies the L4T/JetPack version. System clock is confirmed correct now.)

  2. nvcc --version:

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2024 NVIDIA Corporation
    Built on Wed_Aug_14_10:14:07_PDT_2024
    Cuda compilation tools, release 12.6, V12.6.68
    Build cuda_12.6.r12.6/compiler.34714021_0
    

    (This matches JetPack 6.2 expectations).

  3. ls -l /usr/local/cuda:

    lrwxrwxrwx 1 root root 20 Jun 18 10:43 /usr/local/cuda -> /usr/local/cuda-12.6
    

    (Symlink seems correct).

  4. sudo find /usr /opt -name "libtritonserver.so":
    (No output - file not found).

  5. rm -rf ~/.cache/gstreamer-1.0 ; gst-inspect-1.0 nvinfer:

    (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/opt/nvidia/deepstream/deepstream-7.1/lib/gst-plugins/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
    (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/opt/nvidia/deepstream/deepstream-7.1/lib/gst-plugins/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
    (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
    (gst-plugin-scanner:PID): GStreamer-WARNING **: HH:MM:SS.MS: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
    ... (nvdsgst_infer plugin details load successfully) ...
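
To narrow this down further, two additional checks (a sketch; ldd and dpkg are standard tools, and the plugin path is taken from the warnings above):

    # Show exactly which shared libraries the inferserver plugin cannot resolve
    ldd /opt/nvidia/deepstream/deepstream-7.1/lib/gst-plugins/libnvdsgst_inferserver.so | grep "not found"
    # Check whether any Triton-related packages were installed by SDK Manager at all
    dpkg -l | grep -i triton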
    

SDK Manager CLI Interaction Summary:
I used the SDK Manager Docker image’s --query interactive mode to select components. The OS flash was deselected, and all Jetson Runtime and Jetson SDK components (including DeepStream 7.1, CUDA-X AI, etc.) were selected. After these selections, the SDK Manager TUI prompted for the target IP, username (cyberhope), and password, which were provided. The final summary screen from SDK Manager was:

===== INSTALLATION COMPLETED SUCCESSFULLY. =====
- DateTime Target Setup: Installed
- Jetson Platform Services: Installed
(Other 19 components like CUDA Runtime, cuDNN Runtime, TensorRT Runtime, OpenCV Runtime, DeepStream, etc., reported as "Up-to-date")
===== Installation completed successfully - Total 21 components =====
===== 2 succeeded, 0 failed, 19 up-to-date, 0 skipped =====

Key Questions:

  1. Why is libtritonserver.so not being installed by SDK Manager as part of the JetPack 6.2 “CUDA-X AI” or “DeepStream SDK” components for AGX Orin?
  2. Why is OpenCV (version 4.8.0 as per jtop) installed by SDK Manager not built with CUDA support, resulting in jtop showing “CUDA: NO”?
  3. Could the “BOARD: generic” and unusual future date in /etc/nv_tegra_release (despite jtop showing correct L4T/JetPack versions) be indicative of an issue with the base OS image that might be preventing proper component installation?
  4. What is the recommended procedure or specific CLI arguments to ensure a complete installation of all JetPack 6.2 target software components, including Triton client libraries and a CUDA-enabled OpenCV, using SDK Manager CLI in Docker?
  5. Could the host OS being Ubuntu 24.04 (though SDKM is Dockerized with an Ubuntu 22.04 base and qemu-user-static/binfmt-support are installed on host) be a factor?

Troubleshooting Already Attempted:

  • Ensured host prerequisites for Docker SDKM (qemu-user-static, binfmt-support) are met.
  • Cleared SDKM cache on the host (by removing and letting Docker SDKM recreate ~/.nvsdkm mapped volume).
  • Used SDKM’s interactive query (--query interactive) to make selections, then let the TUI proceed to target connection details.
  • Confirmed correct target credentials (cyberhope user) are used by SDKM during the TUI phase.

My primary goal is to get a stable Jetson AGX Orin environment running JetPack 6.2 with DeepStream 7.1, which includes successful TensorRT engine generation/persistence and full functionality of the AI stack. The inability to save engine files was the initial symptom that led to discovering these missing/misconfigured components.

SDK Manager logs from the host (from the Docker volume mapped to /home/nvidia/.nvsdkm/logs inside the container) can be provided if specific log files are most useful.

Any guidance would be greatly appreciated.

Thank you!
Rick Barretto

Hi,

1.
Please find the details below on deploying the Triton server with DeepStream:

2.
The default OpenCV package is not built with CUDA support.
But you can find a CUDA-enabled build below:

3.
"BOARD: generic" is expected.
r36.4.3 was released this year for Super Mode, so the date (Jan 2025) is also expected.

We recommend trying Triton with a container so you don’t need to worry about compatibility issues.
If you want to use it with DeepStream, please try the image below, which includes DeepStream + Triton support:

Ex. nvcr.io/nvidia/deepstream:7.1-triton-multiarch

If you only want to use the Triton server, you can find a monthly release below:

Ex. nvcr.io/nvidia/tritonserver:25.05-py3-igpu-sdk
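
For reference, a minimal sketch of how the DeepStream + Triton container might be launched on the Jetson (the --runtime nvidia flag, host networking, and X11 mount are typical Jetson container options assumed here, not part of the original reply):

    # Pull and start the DeepStream + Triton container on the Jetson (sketch)
    sudo docker pull nvcr.io/nvidia/deepstream:7.1-triton-multiarch
    sudo docker run -it --rm --runtime nvidia --network host \
        -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
        nvcr.io/nvidia/deepstream:7.1-triton-multiarch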

Thanks


Thank you, I will explore these options first to develop our application. We will be using multiple cameras that we want all working simultaneously on the Jetson AGX, so the demo projects will be a perfect starting point.

Hi,

For multi-camera use cases, it’s recommended to try DeepStream, since it provides tools for multi-stream pipelines which should help.
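
As an illustration, the DeepStream samples ship multi-stream reference configs that can be run with deepstream-app; a sketch (the exact config filename follows the DeepStream samples layout and is an assumption that may differ by release):

    # Inside the DeepStream container: run a bundled 4-stream reference pipeline (sketch)
    cd /opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app
    deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt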

Thanks.

OK, thank you. So for the container that runs best on the Jetson AGX Orin 64GB Developer Kit, you suggest deepstream:7.1-samples-multiarch: “The DeepStream samples container extends the base container to also include sample applications that are included in the DeepStream SDK along with associated config files, models, and streams. This container is ideal to understand and explore the DeepStream SDK using the provided samples.”

Thank you again for your guidance on getting CUDA-enabled OpenCV working on my Jetson AGX Orin (JetPack 6.2, L4T R36.4.3, CUDA 12.6). I’ve implemented your advice and wanted to provide a detailed update on the process, including some challenges encountered and our final verification.

1. Initial Problem:
My jtop utility indicated OpenCV: 4.8.0 with CUDA: NO, despite other JetPack components being correctly installed. My goal was to ensure OpenCV’s Python bindings were CUDA-accelerated.

2. Your Initial Advice:
You advised using the pypi.jetson-ai-lab.dev/jp6/cu126/+simple/ index for pip installation, specifically recommending:
pip install opencv-python --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126/+simple/

3. The Installation Journey & Challenges:

We repeatedly ran the suggested pip command (sudo pip3 install --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126/+simple/ opencv-python). Each time, pip reported “Successfully installed”. The key package downloaded and installed by this process was consistently opencv_contrib_python-4.11.0.86-cp310-cp310-linux_aarch64.whl.

However, initial verification attempts were misleading or failed, leading to further troubleshooting:

  • Initial jtop Misleading: jtop continued to show OpenCV: 4.8.0 with CUDA: NO.

  • ImportError: libcblas.so.3 / libtesseract.so.4: When attempting to verify with python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda, we initially encountered ImportError for libcblas.so.3 and later for libtesseract.so.4.

    • Root Cause: These were external system dependencies inadvertently removed by an earlier, overly broad sudo apt autoremove libopencv* command.

    • Resolution: Reinstalling libblas3, liblapack3, libatlas-base-dev, and libtesseract4 (which pulled liblept5) resolved these ImportError issues.
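
For reference, the dependency reinstall amounted to roughly the following (a sketch; the package names are the Ubuntu 22.04 ones listed above):

    # Reinstall the system libraries removed by the earlier 'apt autoremove libopencv*'
    sudo apt-get update
    sudo apt-get install libblas3 liblapack3 libatlas-base-dev libtesseract4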

4. The Definitive Confirmation of Success:

After ensuring all underlying system dependencies were in place, repeated execution of the Python verification command consistently yielded the desired result:


cyberhope@jetsonagx:~$ python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda
    Extra dependencies:          /usr/lib/aarch64-linux-gnu/liblapack.so /usr/lib/aarch64-linux-gnu/libcblas.so /usr/lib/aarch64-linux-gnu/libatlas.so /usr/lib/aarch64-linux-gnu/libjpeg.so /usr/lib/aarch64-linux-gnu/libpng.so /usr/lib/aarch64-linux-gnu/libz.so Iconv::Iconv m pthread cudart_static dl rt nppc nppial nppicc nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cudnn cufft -L/usr/local/cuda/lib64 -L/usr/lib/aarch64-linux-gnu
    To be built:                 alphamat aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann fuzzy gapi hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow phase_unwrapping photo plot python3 quality rapid reg rgbd saliency shape signal stereo stitching structured_light superres surface_matching text tracking video videoio videostab wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto
  NVIDIA CUDA:                   YES (ver 12.6, CUFFT CUBLAS FAST_MATH)


This output unequivocally confirms that:

  • OpenCV (specifically, the opencv-contrib-python-4.11.0.86 package installed via pip) is now correctly configured to use CUDA.

  • It successfully detects and links to CUDA 12.6, indicating proper setup of CUDA paths and libraries.

  • The To be built section further confirms that CUDA-accelerated modules within OpenCV are enabled.
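
As an additional runtime sanity check (a sketch; cv2.cuda.getCudaEnabledDeviceCount() is standard OpenCV API, though it was not part of the original verification), the GPU can be queried directly from the Python bindings:

    # Should print 1 on the AGX Orin when the CUDA-enabled wheel is active
    python3 -c "import cv2; print(cv2.cuda.getCudaEnabledDeviceCount())"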

5. The jtop Discrepancy (Current State):

Despite the definitive Python confirmation, jtop still shows OpenCV: MISSING. (Previously it showed NO, and after a pip uninstall and jtop re-install/reboot, it changed to MISSING).

This suggests jtop might be querying the system’s apt-installed C++ OpenCV (which may or may not be fully removed/re-detected, or whose Python bindings might not be configured for CUDA out-of-the-box), rather than directly inspecting the pip-installed Python bindings.
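
A quick way to see the two views side by side (a sketch; opencv_version is the binary shipped with the apt OpenCV packages, while the python3 call exercises the pip-installed wheel):

    # What jtop-style checks see: the system opencv_version binary (apt OpenCV)
    opencv_version --verbose 2>/dev/null | grep "NVIDIA CUDA" || echo "system OpenCV binary: missing or built without CUDA"
    # What the pip-installed wheel reports
    python3 -c "import cv2; print(cv2.__version__); print([l for l in cv2.getBuildInformation().splitlines() if 'NVIDIA CUDA' in l])"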

6. Conclusion & Verification Request:

Based on the consistent python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda output, we are confident that the Jetson now has CUDA-enabled OpenCV (version 4.11.0.86) working correctly for Python development.

Could you please confirm whether this final state, as shown by the python3 command’s output, is indeed the expected and fully functional setup for CUDA-enabled OpenCV on JetPack 6.2? Any insights into why jtop might be reporting “MISSING” while the Python environment is functional would also be greatly appreciated.

This detailed breakdown might be useful for others encountering similar issues, and we’d be happy to contribute it to community resources.

Thank you again for your invaluable assistance.

Hi,

Yes, the OpenCV in your environment should have CUDA support now.
jtop checks CUDA support directly with the opencv_version --verbose | grep "NVIDIA CUDA" command.

In our environment, where OpenCV is built from source with jetson-containers, we do see the expected log:

opencv_version --verbose | grep "NVIDIA CUDA"
  NVIDIA CUDA:                   YES (ver 12.6, CUFFT CUBLAS FAST_MATH)

Thanks.