How can I install torch_tensorrt on the Orin?

I’m trying to install torch_tensorrt on the Orin, but I’m running into build errors.
Could you advise?

cat /etc/nv_tegra_release
# R35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023

JetPack: 5.1.1

Docker image:

torch_tensorrt is officially supported only up to JetPack 5.0.
I confirmed this with the command below.

python3 setup.py bdist_wheel --jetpack-version 5.0 --use-cxx11-abi

I wanted to pass 5.1.1, but the build script only accepts 5.0, so I set it to 5.0 in the source code.

running bdist_wheel
using CXX11 ABI build
Jetpack version: 5.0
building libtorchtrt
Starting local Bazel server and connecting to it...
ERROR: /home/mic-733ao/d_wat/Digi_Edge_Flow_env/test/torch_tensorrt_v1.4.0/WORKSPACE:41:21: fetching new_local_repository rule //external:cuda: The repository's path is "/usr/local/cuda-11.8/" (absolute: "/usr/local/cuda-11.8") but this directory does not exist.
INFO: Repository cudnn instantiated at:
  /home/mic-733ao/d_wat/Digi_Edge_Flow_env/test/torch_tensorrt_v1.4.0/WORKSPACE:71:13: in <toplevel>
Repository rule http_archive defined at:
  /root/.cache/bazel/_bazel_root/e56405f9308dec965f173d563e26acb0/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in <toplevel>
INFO: repository @cudnn' used the following cache hits instead of downloading the corresponding file.
 * Hash '36fff137153ef73e6ee10bfb07f4381240a86fb9fb78ce372414b528cbab2293' for
If the definition of 'repository @cudnn' was updated, verify that the hashes were also updated.
ERROR: /root/.cache/bazel/_bazel_root/e56405f9308dec965f173d563e26acb0/external/tensorrt/BUILD.bazel:177:11: @tensorrt//:nvinferplugin depends on @cuda//:cudart in repository @cuda which failed to fetch. no such package '@cuda//': The repository's path is "/usr/local/cuda-11.8/" (absolute: "/usr/local/cuda-11.8") but this directory does not exist.
ERROR: Analysis of target '//:libtorchtrt' failed; build aborted:
INFO: Elapsed time: 612.016s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (67 packages loaded, 475 targets configured)
    Fetching /root/.cache/bazel/_bazel_root/e56405f9308dec965f173d563e26acb0/external/cudnn; Extracting cudnn-linux-x86_64-

The build expects /usr/local/cuda-11.8, but my environment only has /usr/local/cuda-11.4.
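One possible workaround (a sketch, not an official fix) is to repoint the pinned CUDA path in the WORKSPACE file at the toolkit JetPack actually ships. The stanza below is an assumed excerpt of the `new_local_repository` rule named in the error message, written to a sample file for demonstration; in a real checkout you would run the `sed` line against `torch_tensorrt_v1.4.0/WORKSPACE` itself, or alternatively symlink `/usr/local/cuda-11.8` to `/usr/local/cuda-11.4`.

```shell
# Assumed excerpt of the new_local_repository stanza from the error message;
# in a real checkout, edit torch_tensorrt_v1.4.0/WORKSPACE instead.
cat > WORKSPACE.sample <<'EOF'
new_local_repository(
    name = "cuda",
    path = "/usr/local/cuda-11.8/",
    build_file = "@//third_party/cuda:BUILD",
)
EOF

# Repoint the pinned toolkit at the CUDA version JetPack 5.1.1 installs.
sed -i 's|/usr/local/cuda-11.8|/usr/local/cuda-11.4|g' WORKSPACE.sample
grep 'path' WORKSPACE.sample
# prints:     path = "/usr/local/cuda-11.4/",
```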

Is there a good way to install torch_tensorrt on the Orin?


Could you share how you set up torch_tensorrt?
Which branch are you using?


Thank you for replying.

I tried branch v1.4.0, following the steps below.

git clone -b v1.4.0 https://github.com/pytorch/TensorRT.git torch_tensorrt_v1.4.0
apt update
apt install build-essential openjdk-11-jdk zip unzip
cd torch_tensorrt_v1.4.0/py
python3 setup.py bdist_wheel --jetpack-version 5.0 --use-cxx11-abi

If you need more information, please let me know.


Thanks, we are trying to reproduce this issue internally and will let you know what we find.



The Platform Support section on the GitHub page does not mention the ARM architecture. Has anyone successfully used it on a Jetson?

@francois.plessier I have a container and Dockerfile for it here:

It uses v1.4.0 on JetPack 5 and v1.0.0 on JetPack 4.
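For reference, a container build along those lines might look like the minimal sketch below. The base-image tag, the bazelisk download, and the `sed` workaround for the pinned CUDA path are all assumptions on my part, not the contents of the actual Dockerfile mentioned above:

```dockerfile
# Sketch only: base tag and bazelisk URL are assumptions, not the real Dockerfile.
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3

# Build prerequisites from the steps earlier in the thread.
RUN apt-get update && \
    apt-get install -y build-essential openjdk-11-jdk zip unzip git wget && \
    rm -rf /var/lib/apt/lists/*

# Bazel via bazelisk (arm64 binary).
RUN wget -O /usr/local/bin/bazel \
      https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-arm64 && \
    chmod +x /usr/local/bin/bazel

# Clone v1.4.0, repoint the pinned CUDA path at JetPack's cuda-11.4, and build.
RUN git clone -b v1.4.0 https://github.com/pytorch/TensorRT.git /opt/torch_tensorrt && \
    cd /opt/torch_tensorrt && \
    sed -i 's|/usr/local/cuda-11.8|/usr/local/cuda-11.4|g' WORKSPACE && \
    cd py && python3 setup.py bdist_wheel --jetpack-version 5.0 --use-cxx11-abi
```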


Nice! Thank you!
