I have downloaded nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0_1.0-1_arm64.deb,
but I don't know how to fill in the xx and x.x placeholders in these instructions:
os="ubuntuxx04"
tag="10.x.x-cuda-x.x"
sudo dpkg -i nv-tensorrt-local-repo-${os}-${tag}_1.0-1_arm64.deb
sudo cp /var/nv-tensorrt-local-repo-${os}-${tag}/*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
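If I substitute the values from that filename, my guess (just an assumption; I'm not even sure this ubuntu2004 arm64 SBSA package applies to Jetson) would be:
# guessed from the .deb name: ubuntu2004 / TensorRT 8.6.1 / CUDA 12.0
os="ubuntu2004"
tag="8.6.1-cuda-12.0"
sudo dpkg -i nv-tensorrt-local-repo-${os}-${tag}_1.0-1_arm64.deb
sudo cp /var/nv-tensorrt-local-repo-${os}-${tag}/*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
Is that the intended mapping?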
My env:
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 20.04 focal
- Release: 5.10.216-tegra
jtop:
- Version: 4.2.12
- Service: Active
Libraries:
- CUDA: 12.3.107
- cuDNN: 8.6.0.166
- TensorRT: 8.5.2.2
- VPI: 2.4.8
- Vulkan: 1.3.204
- OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3
Python Environment:
Python 3.8.10
GStreamer: YES (1.16.3)
NVIDIA CUDA: YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
OpenCV version: 4.9.0 CUDA True
YOLO version: 8.3.33
PYCUDA version: 2024.1.2
Torch version: 2.5.1+l4t35.6
Torchvision version: 0.20.1a0+3ac97aa
DeepStream SDK version: 1.1.8
onnxruntime version: 1.16.3
onnxruntime-gpu version: 1.18.0
Hi,
As mentioned in the comment below, TensorRT is not upgradable on JetPack 5.
So please use the TensorRT 8.5 that ships with JetPack 5.1.4.
Hi,
This is not supported.
For JetPack 5, only CUDA (up to CUDA 12.2) is upgradable.
Thanks.
Thanks.
Are you sure that TensorRT 8.5 on JetPack 5.1.4 is NOT upgradable?
We have an issue involving setHardwareCompatibilityLevel that requires upgrading past TensorRT 8.5, which doesn't have this API.
It seems onnxruntime 1.19.2 uses setHardwareCompatibilityLevel, but the CUDA 11.8 stack doesn't have this API implemented.
Please tell me which version can be used to build onnxruntime 1.19.2 on L4T 35.6 / JetPack 5.1.4?
[ 80%] Building CXX object CMakeFiles/onnx_test_runner.dir/home/daniel/Work/onnxruntime/onnxruntime/test/onnx/main.cc.o
[ 81%] Built target onnxruntime
[ 81%] Building CXX object CMakeFiles/onnxruntime_perf_test.dir/home/daniel/Work/onnxruntime/onnxruntime/test/perftest/command_args_parser…
Hi,
The latest TensorRT for JetPack 5 is 8.5.2.
Thanks.
I also didn't find an 8.5.2 package for Jetson. You have said that the ARM SBSA packages are NOT for Jetson,
so which 8.5.2 is for Jetson? It's confusing.
Hi,
TensorRT 8.5 is included in JetPack 5.1.4.
You can find it in SDK Manager or at the link below:
https://repo.download.nvidia.com/jetson#Jetpack%205.1.4
Thanks.
Can you help check which version implements the setHardwareCompatibilityLevel API?
Currently TensorRT 8.5 doesn't have this API.
Hi,
For JetPack 5, only CUDA is upgradable, and only up to v12.2.
From JetPack 6, CUDA/cuDNN/TensorRT are all upgradable.
Upgradable packages can be found at the website link.
The default packages are included in JetPack and can be found in SDK Manager or on our apt server.
Thanks.
Just FYI.
The feature you are interested in is not supported on Jetson.
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#hardware-compat
By default, TensorRT engines are only compatible with the type of device where they were built. With build-time configuration, engines that are compatible with other types of devices can be built. Currently, hardware compatibility is supported only for Ampere and later device architectures and is not supported on NVIDIA DRIVE OS or JetPack.
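For illustration only: on platforms where this is supported (TensorRT 8.6 or newer on discrete Ampere-or-later GPUs, not Jetson), hardware compatibility is requested at build time. A minimal sketch with trtexec, assuming a placeholder model.onnx; this option is not present in the TensorRT 8.5 trtexec shipped with JetPack 5:
# sketch for a supported platform only; not available on JetPack
trtexec --onnx=model.onnx \
        --hardwareCompatibilityLevel=ampere+ \
        --saveEngine=model_ampere_plus.engine
The equivalent build-time API call is IBuilderConfig::setHardwareCompatibilityLevel.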
Thanks.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.