I guess there aren’t many to choose from. With four GPUs connected, bandwidth is usually a concern because the bus lanes are shared among all the slots. But if you don’t care about bandwidth, here are two motherboards I know of that support four PCI-e slots, whether in PCI-e 1.0, 2.0, or hybrid mode:
ASUS P5K64 WS
TYAN S2915
I have the S2915 myself, which can operate at 16x/8x/16x/8x according to the specification. However, what I observed in CUDA after installing 4 GPUs is that they’re all running at 8x speed (under Linux). Not sure if it’s a problem with my setup or not, but…just be aware of that potential issue…:P
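For anyone who wants to sanity-check a setup like this, here’s a minimal sketch (plain CUDA runtime API, nothing setup-specific) that just enumerates what the runtime sees. As far as I know the runtime API doesn’t report PCIe link width, so for the 16x-vs-8x question you’d still run lspci -vv and look at the LnkSta field; the sketch only confirms how many devices CUDA actually detects:

#include <stdio.h>
#include <cuda_runtime.h>

/* Minimal sketch: list every CUDA device the runtime can see.
   Handy sanity check after dropping 4 cards into one board. */
int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d: %s (compute %d.%d, %.0f MB)\n",
               dev, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}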
I guess there should be a better motherboard out there for the ultimate CUDA cluster setup. Any suggestions? Thanks!
I think another option could be the Skulltrail MB from Intel. My question is whether the Skulltrail can run two 9800 GX2 cards plus two 8800 GTs for a total of 6 CUDA devices.
This would fit nicely with the fact that the Skulltrail can accommodate 2 quad-core Xeons for a total of 8 cores. This monster could sustain a peak of 5 TFLOPS for less than $10k. Does anyone know if this is feasible or not?
I don’t know what is officially supported, but the current release does work with more than just 4 GPUs. One of the guys at Tycrid has our program VMD running with 6 GPUs using one of their PCIe backplanes:
Info) VMD for LINUXAMD64, version 1.8.7a19 (March 19, 2008)
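For what it’s worth, the runtime itself doesn’t seem to cap the device count; the usual pattern is one host thread per GPU, each calling cudaSetDevice before doing any other CUDA work. A minimal sketch with pthreads (the per-device work is a placeholder comment, not anything taken from VMD):

#include <pthread.h>
#include <stdio.h>
#include <cuda_runtime.h>

#define MAX_DEVICES 16

/* Per-device worker: bind this host thread to one GPU, then do work.
   cudaSetDevice must come before any other CUDA call in this thread. */
static void *worker(void *arg)
{
    int dev = (int)(size_t)arg;
    cudaSetDevice(dev);
    /* ... allocate, copy, and launch kernels on this device ... */
    printf("host thread bound to device %d\n", dev);
    return NULL;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);            /* 6 devices works the same as 4 */
    if (count > MAX_DEVICES) count = MAX_DEVICES;
    pthread_t threads[MAX_DEVICES];
    for (int dev = 0; dev < count; ++dev)
        pthread_create(&threads[dev], NULL, worker, (void *)(size_t)dev);
    for (int dev = 0; dev < count; ++dev)
        pthread_join(threads[dev], NULL);
    return 0;
}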
Correct me if I am wrong, but you can’t physically fit more than two Tesla boards on any existing mobo (even Skulltrail) because they are double-wide and the bottom two PCI-e slots are adjacent (you have to use the first slot for something with video out). The most you could fit is 4x single-wide or 3x double-wide. :blink: