If CUDA 11.4.19 is installed, is TensorRT 8.4 required?

I have a Jetson AGX Orin with 64 GB. I flashed it with JetPack 5.1 (-b147), which installed:

CUDA 11.4.19

Separately, I installed onnxruntime_gpu version 1.12.1. The intention is to deploy an ONNX model and speed up inference with TensorRT.
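For context, the setup I am aiming for looks roughly like this (a minimal sketch; "model.onnx" is a placeholder for my actual model, and the provider names are ONNX Runtime's standard execution-provider identifiers):

```python
# Sketch of the intended setup: prefer TensorRT, fall back to CUDA, then CPU.
# "model.onnx" is a placeholder path, not an actual file from this thread.
MODEL_PATH = "model.onnx"

PROVIDER_PRIORITY = [
    "TensorrtExecutionProvider",  # fastest on Jetson once TensorRT is set up
    "CUDAExecutionProvider",
    "CPUExecutionProvider",       # always available as a last resort
]

def make_session(model_path=MODEL_PATH):
    """Create an InferenceSession using the best available provider."""
    import onnxruntime as ort  # onnxruntime_gpu 1.12.1 on this machine
    available = ort.get_available_providers()
    chosen = [p for p in PROVIDER_PRIORITY if p in available]
    return ort.InferenceSession(model_path, providers=chosen)
```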

However, the ONNX Runtime NVIDIA TensorRT page indicates TensorRT version 8.4 is required.

Three questions:

  1. Is TensorRT 8.4 required?
  2. Should I remove my current TensorRT version before installing version 8.4 from a Debian local repo?
  3. Is it better to upgrade CUDA to version 11.6 and use TensorRT 8.4?
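For reference, this is how I am checking what is currently installed (standard Jetson/Debian commands; each one falls back to a message if run off-device):

```shell
# Report the L4T/JetPack release, CUDA toolkit, and TensorRT packages.
cat /etc/nv_tegra_release 2>/dev/null || echo "L4T release file not found"
nvcc --version 2>/dev/null || echo "nvcc not on PATH"
dpkg -l 2>/dev/null | grep -i nvinfer || echo "no TensorRT (nvinfer) packages listed"
```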


Please use the default JetPack setting and install the package that supports the Jetson environment.
You can find the ONNXRuntime package for the Jetson below:
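Installing the Jetson wheel typically looks like the following (the wheel filename is an example of the aarch64 naming pattern; match it to your JetPack and Python versions):

```shell
# Example only: the exact wheel name depends on your JetPack/Python combination.
WHEEL="onnxruntime_gpu-1.12.1-cp38-cp38-linux_aarch64.whl"
if [ -f "$WHEEL" ]; then
    pip3 install "$WHEEL"
    # Confirm the TensorRT provider is visible after installation.
    python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
else
    echo "Download $WHEEL (or the wheel matching your setup) from the Jetson Zoo first"
fi
```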


Your assistance is appreciated. Thank you.

Please confirm: is it safe to ignore the ONNX Runtime NVIDIA TensorRT page requirements?
I ask because here is a snippet from that page:

Also, just to be clear, I initially installed ONNXRuntime (JetPack 5.1.1, Python 3.8, onnxruntime 1.15.1) from the page you referred to and ran into runtime errors (hence my post, to rule out package version incompatibilities):


Does the package on the ONNX page also contain Jetson support (an aarch64 package)?
Usually, it only has the desktop package.

What kind of errors do you meet when using the package from the Model Zoo?
The package is built with the same JetPack version, so it should be compatible.


Thanks again for your questions and feedback.

With regard to your first question, I do not believe the ONNX Runtime NVIDIA TensorRT page contains the Jetson support package. It indicates:

  • Please select the GPU (CUDA/TensorRT) version of OnnxRuntime: Install ONNX Runtime | onnxruntime. Pre-built packages and Docker images are available for Jetpack in the Jetson Zoo.

  • Supports: JetPack >= 4.4 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier / AGX Orin)

With regard to your second question, first I get a warning (perhaps not surprising):
2023-08-14 07:22:00.737485911 [W:onnxruntime:Default, tensorrt_execution_provider.h:60 log] [2023-08-14 13:22:00 WARNING] external/onnx-tensorrt/onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

Then a second warning that repeats and does not stop until the process is killed:
2023-08-14 07:22:33.232087749 [W:onnxruntime:Default, tensorrt_execution_provider.h:60 log] [2023-08-14 13:22:33 WARNING] Unknown embedded device detected. Using 59656MiB as the allocation cap for memory on embedded devices.

Thanks in advance for your guidance.


Sorry for the late update.

The warnings are harmless, and you should be able to use it without issue.

The first one indicates a casting issue since TensorRT doesn’t support INT64.
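If you want to see which tensors that cast touches, here is a small sketch (it assumes a model already loaded with the `onnx` Python package; the constant mirrors `onnx.TensorProto.INT64`):

```python
# Sketch: list the INT64 initializers that TensorRT will down-cast to INT32.
INT64 = 7  # numeric value of onnx.TensorProto.INT64

def int64_initializers(model):
    """Given a loaded onnx.ModelProto, return names of INT64 initializers."""
    return [init.name for init in model.graph.initializer
            if init.data_type == INT64]

# Usage (on your device, with the onnx package installed):
#   import onnx
#   print(int64_initializers(onnx.load("model.onnx")))
```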
The second one is specific to the Orin 64GB and will be fixed in JetPack 6/TensorRT 8.6.
For JetPack 5/TensorRT 8.5, TensorRT will use a fallback path instead and allocate 95% of the total memory.
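If the repeated allocation-cap warning is a concern, you can also limit the TensorRT workspace yourself through the execution-provider options (a sketch using ONNX Runtime's documented TensorRT EP option names; the 4 GiB cap is just an example value):

```python
# Sketch: pass TensorRT EP options to cap workspace memory and cache engines.
# Option names follow ONNX Runtime's TensorRT execution-provider documentation;
# the 4 GiB value below is an arbitrary example, not a recommendation.
TRT_OPTIONS = {
    "trt_max_workspace_size": 4 * 1024 * 1024 * 1024,  # bytes
    "trt_engine_cache_enable": True,    # reuse built engines across runs
    "trt_engine_cache_path": "./trt_cache",
}

def make_trt_session(model_path):
    """Create a session with a bounded TensorRT workspace and a CPU fallback."""
    import onnxruntime as ort
    providers = [
        ("TensorrtExecutionProvider", TRT_OPTIONS),
        "CPUExecutionProvider",  # fallback if TensorRT cannot run the model
    ]
    return ort.InferenceSession(model_path, providers=providers)
```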

