TensorRT for CUDA 12.2

Description

Hey everyone! I have a fresh install of Ubuntu 22.04 with CUDA 12.2. I installed the CUDA Toolkit and cuDNN, and after that I was able to use the GPU for PyTorch model training. Now I need to install TensorRT, but I can't find a version for CUDA 12.2, and the version for 12.1 doesn't work. What should I do?
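
For reference, this is the kind of quick check I used to confirm the GPU is usable from PyTorch (just a sanity check, nothing TensorRT-specific):

nvidia-smi
python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
# should print True plus the CUDA version PyTorch was built against (12.x here)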

Environment

TensorRT Version: TensorRT 8.6 GA for Ubuntu 22.04 and CUDA 12.0 and 12.1 DEB local repo package
GPU Type: NVIDIA GeForce RTX 3060 (12 GB)
NVIDIA Driver Version: 535.113.01
CUDA Version: 12.2
CUDNN Version: 8.9.0.5
Operating System + Version: Ubuntu 22.04.3 LTS
Python Version (if applicable): Python 3.11.4
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.2.0.dev20231010
Baremetal or Container (if container which image + tag): Baremetal

Steps To Reproduce

When I run sudo apt-get install tensorrt, I get: E: Unable to locate package tensorrt

I reinstalled everything with CUDA 12.1. I can still train PyTorch models on CUDA, but I get exactly the same error when trying to install TensorRT. I followed the instructions from the official website.
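
Concretely, the steps I ran were roughly these (the exact .deb filename depends on the package downloaded from the TensorRT page):

sudo dpkg -i nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0_1.0-1_amd64.deb
sudo cp /var/nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0/*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install tensorrt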

Hi @Argo_sa,
Are you following the official documentation for TensorRT setup?

Thanks

Dear AakankshaS:

What is your direct suggestion on TensorRT 8.6.1 for the latest RTX 4060 Ti 16 GB, released in July 2023?

While installing the driver for this GPU, it matches either v535.98 (which rolls back to v535.86.10 after installing the CUDA Toolkit) or v535.86.10, which is compatible with CUDA 12.2.

If I try to install the older driver v530.41.03 (released in March 2023) for the RTX 4060 Ti 16 GB, nvidia-smi shows ERR! for the GPU name, i.e., GeForce RTX 4060 Ti does not appear as the GPU name in the nvidia-smi output.

So what I can install is Linux driver v535.86.10, which defaults to CUDA Toolkit 12.2 Update 1. It seems that this CUDA Toolkit is not compatible with the latest TensorRT 8.6.1.

What can I do to install TensorRT? I would appreciate a direct reply.

Thanks in advance,

Mike

Hi,

You can also try the TensorRT NGC container to avoid setup-related issues.
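
For example, something along these lines should give you a working TensorRT environment without touching the host packages (the tag below is only an illustration; please pick a release that matches your driver from the NGC catalog):

docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:23.08-py3
# trtexec and the TensorRT Python bindings are already installed inside the container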

Thank you.

Exactly, this part:

sudo apt-get install tensorrt
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package tensorrt

I am getting the same thing after starting over. I can see that, for some reason, your instructions do not get nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0 into /etc/apt/sources.list.d/. It only shows up there when I run echo "deb [arch=amd64] file:///var/nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0 /" | sudo tee /etc/apt/sources.list.d/tensorrt.list. After that, I run into a new problem:

sudo apt-get install tensorrt
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libnvinfer-dev : Depends: libcudnn8-dev but it is not installable
 libnvinfer8 : Depends: libcudnn8 but it is not installable
E: Unable to correct problems, you have held broken packages.

What is missing?

cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 5
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

/* cannot use constexpr here since this is a C-only file */

Seems like I have everything needed installed.

Driver Version: 530.30.02
CUDA Version: 12.1
Python 3.11.4
CUDNN 8.9.5
Triton Inference Server 23.07
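
Since libnvinfer-dev complains specifically about libcudnn8 / libcudnn8-dev, I can also add what apt itself sees for cuDNN (just a sanity check; the header above only proves the files are on disk, not that the .deb packages are registered):

dpkg -l | grep -i cudnn        # empty if cuDNN was installed from the tar archive rather than the .deb packages
apt-cache policy libcudnn8     # shows whether apt has any installation candidate for the package libnvinfer8 depends on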

Some more details:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

dpkg -l | grep nvinfer returns nothing.

And TensorRT Python API is installed:

python
Python 3.11.4 (main, Jul  5 2023, 13:45:01) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt
>>> print(tensorrt.__version__)
8.6.1
>>> assert tensorrt.Builder(tensorrt.Logger())
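
My reading is that the Python bindings came in as a pip wheel, which is independent of the apt packages, so the import working does not mean the C++ libraries from the .deb repo are present. A quick way to confirm where it came from:

python3 -m pip show tensorrt   # reports the wheel version and install location if it was installed via pip
dpkg -l | grep -i nvinfer      # still empty, since no libnvinfer .deb packages are installed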

Hi guys,

NVIDIA has finally released the TensorRT 10 EA (Early Access) version.

Despite NVIDIA delaying support for compatibility between TensorRT and the CUDA Toolkit (and cuDNN) for almost six months, the new TensorRT release supports CUDA 12.2 through 12.4. It should resolve the previous troublesome and time-consuming issues. Users need to log in to the TensorRT download page to download it.
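
For what it is worth, with TensorRT 10 the Python wheel route may be the simplest path on a CUDA 12.x system. Something like this is worth trying first, although the EA build may still have to be downloaded from the TensorRT page if the wheel has not been published yet:

python3 -m pip install --upgrade tensorrt
python3 -c "import tensorrt; print(tensorrt.__version__)"   # should report a 10.x build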

Cheers.

How about installing the .deb files related to TensorRT, ignoring the dependencies, as shown below?

sudo dpkg --force-depends -i /var/nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0/*.deb
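
One caveat, in case someone tries this: after forcing past the dependencies, apt will treat the system as having broken packages. Simulating the repair first shows what apt would change before actually committing to it:

sudo apt-get -f install --dry-run   # simulate apt's attempt to repair the forced/broken dependencies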

Hi, I am using CUDA 12.2, but TensorRT 10.2 does not seem to support it (I get a serialization error when converting and loading). Any recommendations?
