Can cuDNN automatically utilize several GPUs for a single function?

As is known, the RTX 2080/Ti/Titan support SLI over NVLink, which lets the cards share their memory at different addresses in a flat address space (NVIDIA SLI GeForce RTX 2080 Ti and RTX 2080 with NVLink Review - Conclusion | TechPowerUp):

"With NVLink things have changed dramatically. All SLI member cards can now share their memory, with the VRAM of each card sitting at different addresses in a flat address space. Each GPU is able to access the other's memory."
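
To confirm this on my own machine, I assume the peer access over NVLink would be checked and enabled roughly like this (a minimal sketch, assuming two GPUs at device IDs 0 and 1; the IDs are my assumption):

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int canAccess01 = 0, canAccess10 = 0;
    // Can device 0 map device 1's memory into its address space, and vice versa?
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    printf("peer access 0->1: %d, 1->0: %d\n", canAccess01, canAccess10);

    if (canAccess01) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // device 0 may now dereference pointers to device 1 memory
    }
    if (canAccess10) {
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
    }
    return 0;
}
```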

Can cuDNN automatically utilize several GPUs (RTX 2080/Ti/Titan over NVLink) for a single function, for example cudnnConvolutionForward()?
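
For context, my current understanding (which may be wrong, hence the question) is that a cuDNN handle is bound to one device, so today the application would have to split one convolution across GPUs itself, e.g. by partitioning the batch. A minimal sketch of that manual setup, assuming two NVLink-connected GPUs with device IDs 0 and 1 (the 2-GPU loop and the batch-split comments are my assumptions, not working code):

```c
#include <cuda_runtime.h>
#include <cudnn.h>

int main(void) {
    cudnnHandle_t handle[2];  // one cuDNN handle per GPU
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);        // the handle is bound to the device current at creation time
        cudnnCreate(&handle[dev]);
        // ... create tensor/filter/convolution descriptors for this GPU's slice of the batch
        // ... call cudnnConvolutionForward(handle[dev], ...) on that slice
    }
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudnnDestroy(handle[dev]);
    }
    return 0;
}
```

Is this manual split still required, or can cuDNN spread a single call across both GPUs automatically when they share a flat address space over NVLink?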