Where to download libnvinfer.so.10 for aarch64-Linux

Description

We’re running our system on an NVIDIA Orin hardware platform provided by a third-party vendor, which still ships an OS based on the R35 (L4T) build with Ubuntu 20.04. The stock OS integrates only CUDA 11.4, and the cuDNN library is not even installed.

Recently we have a strong requirement to enable Python 3.12, but enabling Torch with CUDA 11.4 is problematic. I built Python 3.12 from source, upgraded CUDA to 12.2, and installed the cuDNN library. Now I can successfully build Torch 2.2.0 from source with CUDA and cuDNN enabled, and I can also run LLMs on the GPU with llama.cpp.

Besides this, I have already built the open-source TensorRT components from source, but our system only provides a compatible libnvinfer.so.8, so TensorRT 8.6 is the only version usable on our system. If we want to use a higher TensorRT version, such as 10.0, to get more features, we have to upgrade libnvinfer.so.8 to libnvinfer.so.10, which cannot be built from source.
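To confirm which libnvinfer soname the dynamic linker can actually resolve on a given system before attempting an upgrade, a quick dlopen-based probe can help. This is a minimal sketch; the candidate soname list and the helper name are illustrative, not from the original post:

```python
import ctypes

def find_libnvinfer(candidates=("libnvinfer.so.10", "libnvinfer.so.8")):
    """Return the first soname the dynamic linker can resolve, or None.

    ctypes.CDLL() uses dlopen(), which searches the standard library paths
    (ld.so cache, LD_LIBRARY_PATH), so this reflects what the TensorRT
    runtime would actually load on this machine.
    """
    for name in candidates:
        try:
            ctypes.CDLL(name)
            return name
        except OSError:
            continue
    return None

if __name__ == "__main__":
    found = find_libnvinfer()
    print(found or "no libnvinfer found")
```

Running this on the system described above would be expected to report libnvinfer.so.8, confirming that only TensorRT 8.x can link against the installed runtime.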

I checked the available builds on the NVIDIA download sites, but they appear to be for the ARM SBSA platform only. Is it possible to download libnvinfer.so.10 for the aarch64-Linux platform?

Environment

TensorRT Version: 8.6
GPU Type: Nvidia Orin
Nvidia Driver Version:
CUDA Version: 12.2
CUDNN Version:
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): Python 3.12
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.2
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @xiaofeng.lei.bj,
Could you please reach out to the Jetson forum for better assistance?

Thanks