How can I install torch_tensorrt on the Orin?

I’m trying to install torch_tensorrt on the Orin, but I am running into errors.
Could you advise?

cat /etc/nv_tegra_release
# R35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023
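For reference, the R and REVISION fields in that line map to the L4T release (here 35.3.1, which matches the r35.3.1 container tag below). A small illustrative parser, using the sample line above rather than reading the file on a device:

```python
import re

# Illustrative: map the /etc/nv_tegra_release header to an L4T version string.
# The sample line is the output shown above; on a device you would read the
# first line of /etc/nv_tegra_release instead.
line = ("# R35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, "
        "EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023")

m = re.search(r"R(\d+) \(release\), REVISION: ([\d.]+)", line)
l4t_version = f"{m.group(1)}.{m.group(2)}"
print(l4t_version)  # 35.3.1
```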

JetPack : 5.1.1

Docker image : nvcr.io/nvidia/l4t-jetpack:r35.3.1

According to py/setup.py, torch_tensorrt officially supports only up to JetPack 5.0.
I confirmed this in the source, then ran:

python3 setup.py bdist_wheel --jetpack-version 5.0 --use-cxx11-abi

I originally wanted to pass 5.1.1, but based on the setup.py source code I specified 5.0 instead.

running bdist_wheel
using CXX11 ABI build
Jetpack version: 5.0
building libtorchtrt
Starting local Bazel server and connecting to it...
ERROR: /home/mic-733ao/d_wat/Digi_Edge_Flow_env/test/torch_tensorrt_v1.4.0/WORKSPACE:41:21: fetching new_local_repository rule //external:cuda: java.io.IOException: The repository's path is "/usr/local/cuda-11.8/" (absolute: "/usr/local/cuda-11.8") but this directory does not exist.
INFO: Repository cudnn instantiated at:
  /home/mic-733ao/d_wat/Digi_Edge_Flow_env/test/torch_tensorrt_v1.4.0/WORKSPACE:71:13: in <toplevel>
Repository rule http_archive defined at:
  /root/.cache/bazel/_bazel_root/e56405f9308dec965f173d563e26acb0/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in <toplevel>
INFO: repository @cudnn' used the following cache hits instead of downloading the corresponding file.
 * Hash '36fff137153ef73e6ee10bfb07f4381240a86fb9fb78ce372414b528cbab2293' for https://developer.download.nvidia.com/compute/cudnn/secure/8.8.0/local_installers/11.8/cudnn-linux-x86_64-8.8.0.121_cuda11-archive.tar.xz
If the definition of 'repository @cudnn' was updated, verify that the hashes were also updated.
ERROR: /root/.cache/bazel/_bazel_root/e56405f9308dec965f173d563e26acb0/external/tensorrt/BUILD.bazel:177:11: @tensorrt//:nvinferplugin depends on @cuda//:cudart in repository @cuda which failed to fetch. no such package '@cuda//': The repository's path is "/usr/local/cuda-11.8/" (absolute: "/usr/local/cuda-11.8") but this directory does not exist.
ERROR: Analysis of target '//:libtorchtrt' failed; build aborted:
INFO: Elapsed time: 612.016s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (67 packages loaded, 475 targets configured)
    Fetching /root/.cache/bazel/_bazel_root/e56405f9308dec965f173d563e26acb0/external/cudnn; Extracting cudnn-linux-x86_64-8.8.0.121_cuda11-archive.tar.xz

So the build requires /usr/local/cuda-11.8, but my current environment only has /usr/local/cuda-11.4.
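For what it’s worth, the path comes from the new_local_repository rule the error points at (WORKSPACE line 41). A workaround I have seen suggested, though I have not verified it on this setup, is to edit that rule so it points at the CUDA version JetPack actually ships; the build_file attribute below follows the repository’s existing rule, and only the path changes:

```python
# WORKSPACE (around line 41 in the v1.4.0 branch)
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-11.4/",  # was /usr/local/cuda-11.8/; match the installed CUDA
)
```

Judging from the cudnn log above, which fetches an x86_64 archive, the cudnn and tensorrt repositories in the same WORKSPACE may need similar treatment to use the JetPack-provided libraries.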

Is there a good way to install torch_tensorrt on the Orin?

Hi,

Could you share how you set up torch_tensorrt?
Which branch are you using?

Thanks.

Thank you for replying.

I tried to use branch v1.4.0.

I followed the steps below:

git clone -b v1.4.0 https://github.com/pytorch/TensorRT.git torch_tensorrt_v1.4.0
apt update
apt install build-essential openjdk-11-jdk zip unzip
cd torch_tensorrt_v1.4.0/py
python3 setup.py bdist_wheel --jetpack-version 5.0 --use-cxx11-abi

If you need more information, please let me know.

Hi,

Thanks, we are trying to reproduce this issue internally.
We will update you with our findings later.

Thanks.


The Platform Support section on the GitHub page does not mention the ARM architecture. Has anyone successfully used it on a Jetson?

@francois.plessier I have a container and Dockerfile for it here:

It uses v1.4.0 on JetPack 5 and v1.0.0 on JetPack 4.


Nice! Thank you!