Hi,
I want to set up a multi-GPU system using two GTX 680s.
My requirement is I want to develop a code that is scalable across GPUs.
To start with I am planning to set up a 2 GPU system, but the same code is supposed to run on a many-GPU system.
Please tell me what SLI will do.
For the power supply, I need a 650W unit and four 6-pin PCIe power connectors.
Please tell me if there are any other special requirements for the setup, like RAM, motherboard, etc.
You don’t need SLI to do multi-GPU programming. In fact, you probably don’t want to use SLI, as you’ll need to differentiate between the different cards when splitting up your computation. Other than that, there aren’t really any special requirements for a 2-GPU system; you’d probably want a motherboard with 2 PCIe 3.0 x16 slots for your cards, but that’s not too special.
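Without SLI, the CUDA runtime simply exposes each card as its own device, and you pick one explicitly before allocating or launching. A minimal sketch of enumerating and selecting them (error checking omitted for brevity):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);   // both 680s show up as separate devices
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, %zu MB\n", d, prop.name,
               (size_t)(prop.totalGlobalMem >> 20));
    }
    cudaSetDevice(0);  // subsequent allocations/launches on this host thread target device 0
    return 0;
}
```

Code written this way scales naturally from 2 to N GPUs, since the device count is queried at runtime rather than hard-coded.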
Until recently I had a dual-GPU setup with two 2 GB GTX 680s. I got around 930-998 GFLOPS (on a single 680) for cuBLAS SGEMM, which is impressive, and they really perform well in PC games.
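For reference, a GFLOPS figure like that is usually measured with something along these lines (a sketch, not the poster's actual benchmark; the matrix size is an assumption, inputs are left uninitialized, and error checks are omitted):

```cuda
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 4096;  // assumed square problem size
    float *A, *B, *C;
    cudaMalloc(&A, n * n * sizeof(float));
    cudaMalloc(&B, n * n * sizeof(float));
    cudaMalloc(&C, n * n * sizeof(float));

    cublasHandle_t h;
    cublasCreate(&h);
    const float alpha = 1.0f, beta = 0.0f;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, A, n, B, n, &beta, C, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // square SGEMM performs 2*n^3 floating-point operations
    printf("%.1f GFLOPS\n", 2.0 * n * n * n / (ms * 1e6));

    cublasDestroy(h);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```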
I agree with alrikai about SLI, no need for it and it can cause problems.
Very nice setup, and unless you really need fast double precision that is the best route. Just make sure you get the right motherboard, and spend the money on a nice CPU, as it still plays a big role. Something like the Intel i7-3770 at 3.5 GHz or better.
Yes completely agree with that.
I have a GTX 660 with an Intel i7-3770 @ 3.4 GHz.
The performance of the machine is very good.
This time I will get one at 3.5 GHz or above.
Thanks.
No, you will see two separate GPUs. You have to copy data to each of them separately and launch kernels separately. It is all up to the programmer (=you) to handle the two (or more) GPUs.
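A minimal sketch of what "handle the GPUs yourself" looks like in practice; `scale_multi_gpu` is a hypothetical helper for illustration, and real code would use streams to overlap the copies and check every return value:

```cuda
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

// Split an array of n floats evenly across ngpu devices.
void scale_multi_gpu(float *host, int n, int ngpu) {
    int chunk = (n + ngpu - 1) / ngpu;
    for (int d = 0; d < ngpu; ++d) {
        int off = d * chunk;
        int len = (off + chunk <= n) ? chunk : n - off;
        if (len <= 0) break;
        cudaSetDevice(d);                        // select this GPU
        float *dev;
        cudaMalloc(&dev, len * sizeof(float));
        cudaMemcpy(dev, host + off, len * sizeof(float),
                   cudaMemcpyHostToDevice);      // separate copy per GPU
        scale<<<(len + 255) / 256, 256>>>(dev, len);  // separate launch per GPU
        cudaMemcpy(host + off, dev, len * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }
}
```

Because the loop runs over however many devices are present, the same code covers the 2-GPU case and a many-GPU system.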
@Gert-Jan: Please explain what SLI does from a programmer's point of view. If what you said is correct, then SLI does nothing for CUDA.
Please elaborate.
I have a GPU code that takes around 8 seconds on a GTX 680.
The multi-GPU version of it takes 4.2 seconds with 2 streams on dual 680s.
My problem is that it runs fine on Windows 7 Server with 2 GTX 680s (4 GB each).
But when I run it on Windows 7 Enterprise with 2 GTX 680s (2 GB each), the system restarts.
And after a few runs the .cu file becomes random binary and the build fails.
Any idea why this could be happening?
My problem uses less than 2 GB of memory, and the kernel is called 128 times on a single GPU
and 64 times per GPU in the multi-GPU version.