Question: CUDA with multiple devices in SLI mode

In the NVIDIA CUDA Programming Guide 2.1:

The use of multiple GPUs as CUDA devices by an application running on a multi-GPU system is only guaranteed to work if these GPUs are of the same type. If the system is in SLI mode however, only one GPU can be used as a CUDA device since all the GPUs are fused at the lowest levels in the driver stack …

So, if I have two GTX 285 cards (each with 240 processors and 1 GB of memory) working in SLI mode:

  1. Does CUDA recognize them as one device with 480 processors and 2 GB of memory, or as one device with 240 processors and 1 GB of memory?
  2. If CUDA recognizes them as one device with 480 processors and 2 GB of memory, are the threads automatically distributed across the 480 processors when a kernel is launched on the device? And is the 2 GB of memory shared by the GPUs, or does each GPU hold its own copy of the same data?

Right now, if you have two cards in SLI, CUDA sees only one card (not one logical card combining the resources of the two devices, but only one of the physical cards).
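
You can verify this on your own system by asking the runtime how many devices it exposes; with SLI enabled you would expect to see a single entry. A minimal sketch (plain CUDA runtime API, no error checking):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices visible: %d\n", count);

    // Print basic properties of each device CUDA actually exposes.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  device %d: %s, %d multiprocessors, %zu MB global memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```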

Thank you for the explanation. I misread the manual and assumed that the cards are fused into one logical card combining the resources of the two devices.

There’s no need to use SLI for CUDA to begin with. You can already write your application to split the work across multiple GPUs yourself, along the lines of the sketch below.
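
As an illustration of the idea, here is a minimal sketch that splits one array across all visible GPUs with cudaSetDevice. The kernel and sizes are made up for the example, and it assumes a recent CUDA runtime where a single host thread can switch devices (older CUDA versions required one host thread per GPU):

```cpp
#include <cuda_runtime.h>

// Hypothetical kernel: scale each element by a constant factor.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    int devices = 0;
    cudaGetDeviceCount(&devices);
    if (devices == 0) return 1;
    int chunk = n / devices;  // give each GPU an equal slice of the array

    for (int d = 0; d < devices; ++d) {
        cudaSetDevice(d);                       // select this GPU
        float* dev = nullptr;
        cudaMalloc(&dev, chunk * sizeof(float));
        cudaMemcpy(dev, host + d * chunk, chunk * sizeof(float),
                   cudaMemcpyHostToDevice);
        scale<<<(chunk + 255) / 256, 256>>>(dev, chunk, 2.0f);
        cudaMemcpy(host + d * chunk, dev, chunk * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }

    delete[] host;
    return 0;
}
```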

Or you could switch to OpenGL or DirectX and write something that supports SLI, but that would probably cost you a fair amount of performance and development time.