I’m looking to set up a new machine that will support a dual-GPU configuration (probably two GTX 570s) and am looking for suggestions for a motherboard (or for the currently recommended chipset).
I’m looking for two PCI-E slots running at full x16 speed.
I’d also like a Sandy Bridge CPU, if that is compatible with the previous requirement.
One last thing, which probably points to specific motherboards rather than just a chipset: if possible, I want a motherboard that supports running the display off the onboard GPU rather than the GTX 570, so that I can use Nsight without a second GPU, or debug on both GPUs.
And finally, how big a PSU do I need for such a dual-GPU setup? Will 800W be enough, or do I need to opt for 1200W?
Unfortunately, these requirements are mutually exclusive. Sandy Bridge currently has only 16 lanes of PCI-E 2.0 integrated into the CPU and a much slower bus for the rest of the I/O. If you want to drive two GPUs at full bandwidth, you need a CPU that communicates with the chipset using HyperTransport (AMD) or QPI (Intel). The LGA 1366 CPUs (X58 chipset) from Intel are the best choice, although the AMD option isn’t bad for the price. A second bonus of the X58 motherboards is the triple-channel memory configuration: my CUDA workstations with the X58 achieve 5.7 GB/sec host-to-device bandwidth with pageable memory, whereas you need pinned memory to hit those speeds on AMD systems.
I have no experience with this, and I suspect that none of the X58 motherboards have onboard video. However, if you get an X58 motherboard with several PCI-E slots connected through NF200 switches, you could install a single-slot GPU alongside your two GTX 570s without any significant loss in bandwidth. Motherboards without such switches have to split the PCI-E lanes when you go past two cards, halving the peak bandwidth available to each.
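For reference, the lane split matters because per-direction PCI-E 2.0 bandwidth scales linearly with lane count. Here is a quick sketch of the spec-sheet arithmetic (5 GT/s per lane with 8b/10b encoding are the spec values; real transfers land somewhat below these ceilings):

```python
# Theoretical per-direction PCI-E 2.0 bandwidth, from the spec numbers.
GT_PER_LANE = 5.0   # PCI-E 2.0 raw signaling rate, gigatransfers/s per lane
ENCODING = 8 / 10   # 8b/10b line coding: 8 data bits per 10 bits on the wire

def lane_bandwidth_gbs(lanes):
    """Theoretical one-direction bandwidth in GB/s for a PCI-E 2.0 link."""
    return lanes * GT_PER_LANE * ENCODING / 8  # GT/s -> Gbit/s -> GB/s

print(lane_bandwidth_gbs(16))  # x16 link: 8.0 GB/s ceiling
print(lane_bandwidth_gbs(8))   # x8 link after splitting: 4.0 GB/s ceiling
```

The 5.7 GB/sec pageable-memory figure quoted above fits comfortably under an x16 link’s 8 GB/s ceiling; once a board splits to x8/x8, you are capped at 4 GB/s per card before protocol overhead.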
800W is more than sufficient. As SPWorley has noted in the past, GPUs tend to draw less than their peak power when running CUDA code (as compared to OpenGL/DirectX). Your pair of GTX 570s is likely to use less than 200W each, and the CPU + motherboard is probably another 200W at most. 800W gives you plenty of headroom at that level of power consumption.
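To make that headroom concrete, here is the power budget spelled out (the per-component wattages are the rough estimates above, not measurements):

```python
# Rough power budget for the proposed build.
# Figures are the estimates from the post above, not measured values.
gpu_watts = 200       # per GTX 570 under CUDA load (below its rated TDP)
num_gpus = 2
cpu_mobo_watts = 200  # CPU + motherboard + drives, a generous estimate

load_watts = num_gpus * gpu_watts + cpu_mobo_watts
psu_watts = 800

print(load_watts)              # estimated draw: 600 W
print(psu_watts - load_watts)  # margin on an 800W supply: 200 W
```

A 1200W unit would just add cost and run at a less efficient fraction of its rating for this load.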
When searching for parts for multi-GPU CUDA workstations, I tend to search for “SLI support” when looking at motherboards and power supplies. SLI has nothing to do with CUDA, but it tends to narrow the spectrum of possible components to those suited for multi-GPU systems.
I did think about the X58 chipset, but wasn’t sure how it compares to Sandy Bridge. I’ve read mixed reviews on whether it’s better to go with Z68 + Sandy Bridge or X58 with a Core i7.
Looks like I’ve found a motherboard that answers most of my needs: the Gigabyte GA-Z68X-UD7-B3. It has the Z68 chipset and two x16 PCI-E slots (via a switch on the motherboard rather than lanes from the CPU). Interestingly, though, it does not have an onboard display output, despite the chipset supporting one.
It is still dual-channel memory, though, compared to the X58’s triple channel.
Sandy Bridge probably has better CPU performance, aside from the I/O restrictions. The proper Sandy Bridge successor to the Nehalem Core i7 + X58 platform won’t be available until at least the end of this year. It will use the LGA 2011 socket to provide a QPI link, 32 lanes of PCI-E 3.0, and quad-channel system memory. Assuming Kepler comes in a PCI-E 3.0 flavor, this should be quite the multi-GPU platform in 2012.