Suppose I buy two GPU cards for CUDA computation.

Are there any restrictions, e.g. on the motherboard? I saw in the wiki:

I don’t know whether this issue is outdated, and I’d like to hear from people with experience in multi-card computation. For example, it seems impossible to exchange data directly between the cards?

CUDA does not use SLI in any way, so there is no restriction on the motherboard you install the cards into. (I have two cards installed in a motherboard with an AMD chipset.) In fact, prior to CUDA 2.3 (which is in private beta now), you have to turn SLI off in order for CUDA to see both cards.

Programming multiple cards with CUDA requires that you run one host thread per card and partition your workload between them manually. Data cannot be transferred directly between cards, only between a card and the host CPU.
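The pattern above can be sketched roughly as follows: each host thread calls cudaSetDevice() to bind itself to one GPU, processes its half of the data, and copies results back through host memory. This is a minimal sketch, not tested code; the `scale` kernel and the `Job` struct are made up for illustration, and all error checking is omitted.

```cuda
// Sketch: one host thread per GPU, workload split in half.
// Assumes exactly two CUDA-capable devices are present.
#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

#define N (1 << 20)

// Hypothetical example kernel: multiply each element by a factor.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

typedef struct { int device; float *host_chunk; int count; } Job;

static void *worker(void *arg) {
    Job *job = (Job *)arg;
    cudaSetDevice(job->device);  // bind this host thread to one GPU

    float *d_data;
    size_t bytes = job->count * sizeof(float);
    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, job->host_chunk, bytes, cudaMemcpyHostToDevice);

    scale<<<(job->count + 255) / 256, 256>>>(d_data, job->count, 2.0f);

    // Results must come back through host memory; the cards cannot
    // exchange data directly.
    cudaMemcpy(job->host_chunk, d_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_data);
    return NULL;
}

int main(void) {
    static float data[N];
    for (int i = 0; i < N; ++i) data[i] = 1.0f;

    // Manual partition: first half to device 0, second half to device 1.
    Job jobs[2] = { {0, data, N / 2}, {1, data + N / 2, N / 2} };
    pthread_t threads[2];
    for (int d = 0; d < 2; ++d)
        pthread_create(&threads[d], NULL, worker, &jobs[d]);
    for (int d = 0; d < 2; ++d)
        pthread_join(threads[d], NULL);

    printf("data[0] = %f, data[N-1] = %f\n", data[0], data[N - 1]);
    return 0;
}
```

If one card needed a result computed on the other, you would copy it device-to-host on the first card and then host-to-device on the second.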

Thanks! :thumbup: