Multiple GPU processing and SLI

Dear All

Can I currently program in CUDA for a single GPU (cudaSetDevice(0)) and have the driver divide the workload across 2 GPUs automatically, i.e., present the 2 GPUs (cluster) as one?

Can CUDA take advantage of an SLI connection in this way?

Thanks

Luis Gonçalves

SLI and CUDA are mostly orthogonal.

CUDA doesn’t use SLI. At the CUDA level, there are no methods to launch a single kernel and have it automatically distribute across two or more GPUs.
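For reference, the usual pattern is to distribute the work manually: select each device in turn, give it its share of the data, and launch on it. A minimal sketch with the CUDA runtime API (the kernel, the even split, and the 8-device cap are all illustrative assumptions, and error checking is omitted):

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: scales one GPU's chunk of data in place.
__global__ void scale(float *data, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

int main() {
    int nDev = 0;
    cudaGetDeviceCount(&nDev);           // e.g. 2 on a dual-GPU machine
    const int N = 1 << 20;
    int chunk = N / nDev;                // assume N divides evenly

    float *d[8];                         // assume at most 8 devices
    for (int g = 0; g < nDev; ++g) {
        cudaSetDevice(g);                // all following calls target GPU g
        cudaMalloc(&d[g], chunk * sizeof(float));
        // ... copy this GPU's chunk of the input here ...
        // Kernel launches are asynchronous, so the loop queues work
        // on all GPUs before any of them finishes.
        scale<<<(chunk + 255) / 256, 256>>>(d[g], chunk, 2.0f);
    }
    for (int g = 0; g < nDev; ++g) {     // wait for every GPU to finish
        cudaSetDevice(g);
        cudaDeviceSynchronize();
    }
    return 0;
}
```

The split, the per-device transfers, and any inter-GPU communication are entirely the programmer's responsibility; nothing in the driver does this for you.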

Dear All

What about cuDNN: can it split a model between GPUs, or do we have to do that manually?

Thanks

Luis Gonçalves

cublasXt and cufftXt can solve a single problem (e.g. a matrix multiply, or an FFT) on 2 or more GPUs “automatically”. I’m not aware of any such capability in cuDNN.
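To illustrate the cublasXt case: you select a set of devices on the handle, then issue a single GEMM call with ordinary host pointers, and the library tiles the problem across the selected GPUs itself. A sketch, assuming GPUs 0 and 1 are present and omitting error checking:

```cuda
#include <cstdlib>
#include <cublasXt.h>

int main() {
    const int m = 1024, n = 1024, k = 1024;
    // Plain host buffers: cublasXt accepts host pointers and performs
    // the GPU tiling and host<->device transfers internally.
    float *A = (float *)malloc(sizeof(float) * m * k);
    float *B = (float *)malloc(sizeof(float) * k * n);
    float *C = (float *)malloc(sizeof(float) * m * n);

    cublasXtHandle_t handle;
    cublasXtCreate(&handle);

    int devices[2] = {0, 1};                  // assumption: two GPUs, ids 0 and 1
    cublasXtDeviceSelect(handle, 2, devices); // run on both of them

    const float alpha = 1.0f, beta = 0.0f;
    // One call, one logical problem, two GPUs: C = alpha*A*B + beta*C.
    cublasXtSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                  &alpha, A, m, B, k, &beta, C, m);

    cublasXtDestroy(handle);
    free(A); free(B); free(C);
    return 0;
}
```

Note this is per-problem parallelism inside the library, not a general mechanism for splitting arbitrary kernels or whole models across GPUs.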

See this related thread: CUDNN and multi-GPU parallelism - GPU-Accelerated Libraries - NVIDIA Developer Forums

You might want to ask cuDNN specific questions in the cuDNN forum:

[url]https://devtalk.nvidia.com/default/board/305/cudnn/[/url]

But I do not understand. The NVIDIA DGX-2 Professional Computing Solution page (https://videocardz.net/nvidia-dgx-2/) states “81920 Unified Cores”. That makes it sound as if all the cores are unified.

How is this achieved?

With NVLink 2?