Hi @carlosgalvezp. TensorRT 8.2 EA was not tested with CUDA 11.5. It may work in some scenarios, but that would be an unsupported combination. The next release of TensorRT will be tested with, and will support, CUDA 11.5.
@carlosgalvezp TensorRT does not pin its dependencies to specific versions. We follow semantic versioning for the components we depend on. If a new component release is made after TensorRT ships and is backward compatible (as determined by the component team), then it's an acceptable upgrade in the eyes of TensorRT. In this case, cuDNN 8.3.0 is backward compatible with cuDNN 8.2. Also, even though cuDNN was built using CUDA 11.5, it will still work with the CUDA versions TensorRT supports, which in this case are CUDA 11.0 through CUDA 11.4. If you would like to prevent CUDA or cuDNN from upgrading as new releases become available, it would be preferable to use a local repo installation of CUDA and TensorRT.
I understand this may lead to combinations of component versions that were not tested at the time of TensorRT's release, but it also provides some flexibility in case a user's application has other version requirements. If a new release were not backward compatible (if the cuDNN version were 9.0, for example), then it would not be allowed.
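The semantic-versioning rule described above can be sketched in a few lines. This is a minimal illustration (not part of any NVIDIA tooling), assuming the usual convention that a release is backward compatible when the major version matches and the minor version is at least the baseline's:

```python
# Minimal sketch of the semantic-versioning rule described above:
# a candidate release is a backward-compatible upgrade of a baseline
# when its major version matches and its minor version is not lower.
def is_backward_compatible(baseline: str, candidate: str) -> bool:
    """Return True if `candidate` is a backward-compatible upgrade of `baseline`."""
    b_major, b_minor, *_ = (int(p) for p in baseline.split("."))
    c_major, c_minor, *_ = (int(p) for p in candidate.split("."))
    return c_major == b_major and c_minor >= b_minor

# cuDNN 8.3.0 is an acceptable upgrade from cuDNN 8.2, but 9.0 would not be:
print(is_backward_compatible("8.2.0", "8.3.0"))  # True
print(is_backward_compatible("8.2.0", "9.0.0"))  # False
```

Under this rule, a major-version bump (such as a hypothetical cuDNN 9.0) is rejected, matching the policy described above.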
@ework Thanks for the reply. So basically “it should work” but Nvidia can’t make any guarantees about it since that particular combination has not been tested. Understood!