Does Tesla support SLI?

To my knowledge, Tesla does not support SLI; maybe it isn't needed.

What if my kernel function running on one GPU depends on data held by another one?
Do I need to copy it from one device's memory to the other?
Or can the memory be shared directly, something like that?

SLI isn't supported, and neither is any kind of implicit memory sharing between devices or device-to-device communication. If you want to move data from one GPU to another, the host has to copy it from the first device to host memory and then from host memory to the other device.
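
For example, the staging pattern would look roughly like this (a minimal sketch, not tested; the function and pointer names are placeholders and error checking is omitted):

    #include <cuda_runtime.h>
    #include <stdlib.h>

    // Move `count` bytes from a buffer on GPU 0 (d_src) to a buffer on
    // GPU 1 (d_dst) by staging through host memory.
    void copy_gpu0_to_gpu1(void *d_src, void *d_dst, size_t count)
    {
        void *h_staging = malloc(count);

        cudaSetDevice(0);                                             // source GPU
        cudaMemcpy(h_staging, d_src, count, cudaMemcpyDeviceToHost);  // device 0 -> host

        cudaSetDevice(1);                                             // destination GPU
        cudaMemcpy(d_dst, h_staging, count, cudaMemcpyHostToDevice);  // host -> device 1

        free(h_staging);
    }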

Theoretically, you might be able to use zero-copy memory on a pair of supported cards to operate directly on the same piece of memory, but that brings up a whole bunch of potential coherence issues that would need to be extremely well managed to make it work correctly.
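
Setting that up would look something along these lines (a rough sketch only, assuming the device reports canMapHostMemory; names and the portable flag are illustrative):

    #include <cuda_runtime.h>
    #include <stddef.h>

    // Allocate pinned, mapped host memory and get a device-side alias to it.
    // Returns the host pointer; writes the device pointer through d_alias.
    void *setup_zero_copy(size_t count, void **d_alias)
    {
        cudaSetDeviceFlags(cudaDeviceMapHost);         // must be set before the context is created

        void *h_buf = NULL;
        cudaHostAlloc(&h_buf, count,
                      cudaHostAllocMapped | cudaHostAllocPortable);  // pinned, mapped, visible to all contexts

        cudaHostGetDevicePointer(d_alias, h_buf, 0);   // device pointer to the same physical memory

        // Kernels can now read/write *d_alias directly over PCIe without an
        // explicit cudaMemcpy -- but nothing keeps two GPUs coherent; the
        // application has to order the accesses itself.
        return h_buf;
    }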

Thank you!
I'll try the zero-copy approach.

If the slow device->host->device copy is used, I doubt how much I will gain from using 2 Teslas compared to 1.

But what about 2 GTX 285s in SLI mode? Though the available memory is only that of 1 card, it is still sufficient for my application. Since the 2 GPUs work together as a single stronger one, I think it would perform better in my case.

Am I right?

In a word, no. SLI has nothing to do with CUDA. Two GPUs, whether they are on the same card or in separate slots, are always separate CUDA devices.
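
You can see this by just enumerating the devices; something like the following (a minimal sketch) lists each GPU as its own CUDA device regardless of any SLI bridge:

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int n = 0;
        cudaGetDeviceCount(&n);                    // each GPU is reported separately, SLI or not
        for (int i = 0; i < n; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);     // per-device properties
            printf("device %d: %s\n", i, prop.name);
        }
        return 0;
    }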

Oh, I see.

And is the memory bandwidth between the 2 devices higher in SLI mode? That would also be a benefit.

No, devices cannot communicate directly even with an SLI link (which is btw very low bandwidth). All communication between devices must go through the CPU.
