@ework This question is closely related to my previous one. In this case, however, I believe Nvidia should provide a way for users to use a "tested" configuration. How has Nvidia tested it if there are no packages available?
You're correct, cuDNN 8.2.1 does not have a CUDA 11.4 build available. However, the CUDA 11.3 build of cuDNN works fine with CUDA 11.4 applications, and that is the version that was used to test TensorRT 8.2.0. CUDA and the deep learning libraries are moving toward supporting CUDA enhanced compatibility, which will allow builds from earlier CUDA toolkits to work with newer CUDA toolkits. We are very close to having this fully supported by TensorRT and cuDNN.
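The compatibility rule described above can be sketched in a few lines. This is an illustrative approximation of CUDA "minor version compatibility" as applied to the case in this thread, not an NVIDIA API; the function name and the exact rule are assumptions for illustration only.

```python
# Illustrative sketch (not an NVIDIA API): under CUDA minor-version
# compatibility, a library built against an earlier 11.x toolkit is
# expected to work with a newer 11.x toolkit.

def minor_version_compatible(built_with: str, running_with: str) -> bool:
    """Return True if a build against `built_with` (e.g. "11.3") is
    expected to work with the `running_with` toolkit (e.g. "11.4")."""
    b_major, b_minor = (int(x) for x in built_with.split("."))
    r_major, r_minor = (int(x) for x in running_with.split("."))
    # Same major version, and the runtime toolkit is at least as new.
    return b_major == r_major and r_minor >= b_minor

# cuDNN 8.2.1 built against CUDA 11.3, used with CUDA 11.4:
print(minor_version_compatible("11.3", "11.4"))  # True
# A CUDA 10.x build is not covered by 11.x compatibility:
print(minor_version_compatible("10.2", "11.4"))  # False
```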
Thanks for the clarification. As a user, it's confusing to have a mix of libraries - we usually strive for consistency and keep everything nice and tidy. In other words, what makes sense is to use CUDA 11.4 for everything, not one library with 11.4 and another with 11.3.
Regarding compatibility, the link that you point to talks about "drivers", not libraries, doesn't it? CUDA 11.5 requires driver 495.x, but it also supports older drivers. I don't see it stated anywhere that the CUDA libraries (not drivers) are compatible with one another.