Use CUDA 11.5 + TensorRT 8.2?


Is it possible to use TensorRT 8.2 together with CUDA 11.5? The latest .deb package seems to indicate it’s built for CUDA 11.4 (the package version is tagged `+cuda11.4`).

I wonder if there will be any issues/incompatibilities if we use CUDA 11.5 instead? Are there plans to build TensorRT 8.2 against CUDA 11.5?


Hi @carlosgalvezp. TensorRT 8.2 EA was not tested with CUDA 11.5. It may work in some scenarios, but that would be an unsupported combination. The next release of TensorRT will be tested and support CUDA 11.5.

Hi, Please refer to the below links to perform inference in INT8


@ework Thanks for the quick reply! Understood, will then wait for the official release.

@NVES I don’t know if this is an automated reply, but it has nothing to do with my question :)

@ework One more thing - when trying to install TensorRT via apt, it’s pulling the wrong dependencies:

```
$ sudo apt-get install libnvinfer8
Get:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  cuda-toolkit-11-5-config-common 11.5.50-1 [16.2 kB]
Get:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  cuda-toolkit-11-config-common 11.5.50-1 [16.3 kB]
Get:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  cuda-toolkit-config-common 11.5.50-1 [16.3 kB]
Get:4 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  libcublas-11-5 [207 MB]
Get:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  libcudnn8 [426 MB]
Get:6 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  libnvinfer8 8.2.0-1+cuda11.4 [161 MB]
```

You can see it’s pulling cuDNN 8.3.0 and CUDA 11.5 for TensorRT 8.2. Why is that? Shouldn’t Nvidia prevent installing incompatible/untested dependency combinations?


@carlosgalvezp TensorRT does not pin its dependencies to specific versions. We follow semantic versioning for the components we depend on. If a new component release is made after TensorRT ships and it is backward compatible (as determined by the component team), then it’s an acceptable upgrade in the eyes of TensorRT. In this case, cuDNN 8.3.0 is backward compatible with cuDNN 8.2. Also, even though cuDNN was built using CUDA 11.5, it will still work with the CUDA versions TensorRT supports, which in this case are CUDA 11.0 through 11.4. If you would like to prevent CUDA or cuDNN from upgrading as new releases become available, the preferred approach is to use a local repo installation of CUDA and TensorRT.

I understand this may lead to combinations of component versions that were not tested at the time of TensorRT’s release, but it also provides some flexibility in case a user’s application has other version requirements. If a new release were not backward compatible (if the cuDNN version were 9.0, for example), then it would not be allowed.
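If you want to freeze those components without switching to a local repo, one alternative (not mentioned in this thread; a sketch using apt’s standard pinning mechanism, with an illustrative file path and version patterns) is an apt preferences file:

```
# /etc/apt/preferences.d/cuda-pin  (illustrative path)

# Keep cuDNN on the 8.2.x series that TensorRT 8.2 was tested against
Package: libcudnn8
Pin: version 8.2.*
Pin-Priority: 1001

# Block the CUDA 11.5 cuBLAS package from being pulled in
Package: libcublas-11-5
Pin: version *
Pin-Priority: -1
```

With such a file in place, `apt-cache policy libcudnn8` shows the effective pin. A simpler option is `sudo apt-mark hold libcudnn8`, which freezes whatever version is currently installed, at the cost of also blocking patch updates.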

@ework Thanks for the reply. So basically “it should work” but Nvidia can’t make any guarantees about it since that particular combination has not been tested. Understood!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.