cuDNN and multi-GPU parallelism

Hi,

I’ve found some conflicting information about this online. Can anyone comment on whether cuDNN currently supports parallelizing deep learning across multiple GPUs installed in a single host?

Thanks so much!

The cuDNN library itself operates on a single GPU per handle. However, multiple cuDNN handles can be created, one per GPU, to work with multiple GPUs in a single host. It is then up to the user or the deep learning framework to manage those handles and to distribute the work across the devices.
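
As a rough illustration of that one-handle-per-GPU pattern (error checking and the per-GPU workload omitted), a cuDNN handle is bound to whichever device is current, via cudaSetDevice(), at the time cudnnCreate() is called:

```c
#include <cuda_runtime.h>
#include <cudnn.h>

#define MAX_GPUS 8  /* arbitrary cap chosen for this sketch */

int main(void) {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
    if (device_count > MAX_GPUS) device_count = MAX_GPUS;

    cudnnHandle_t handles[MAX_GPUS];

    /* Create one cuDNN handle per GPU; each handle is bound to the
       device that is current when cudnnCreate() is called. */
    for (int i = 0; i < device_count; ++i) {
        cudaSetDevice(i);
        cudnnCreate(&handles[i]);
    }

    /* ... per-GPU work goes here: select the device with cudaSetDevice(i)
       and pass handles[i] to the cuDNN calls issued for that GPU.
       Splitting the model or the batch across GPUs, and synchronizing
       gradients, is the caller's (or the framework's) responsibility. ... */

    for (int i = 0; i < device_count; ++i) {
        cudaSetDevice(i);
        cudnnDestroy(handles[i]);
    }
    return 0;
}
```

This is essentially what frameworks such as TensorFlow or PyTorch do under the hood when they run data-parallel training on a single multi-GPU machine.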