power supply issues

I have an MSI P6N Diamond board and I want to install four 8800 Ultras on it.
Mechanically they fit. I have a Tagan power supply rated at 1100 W; each PCI-E rail delivers 20 A × 12 V = 240 W, even when all 4 connectors are in use. My problem now is that I need 8 PCI-E connectors… Can I use Y-splitters, if any exist, and is 20 A enough?

For a test, can I use 4 PCI-E power connectors from another computer?

Thank you.

I’m not precisely sure of your setup, but the main thing you need to keep in mind is this: high-end power supplies are split into several “rails.” You can only draw a fraction of the PSU’s total power from any given rail, so you have to balance your load across them, and you should not connect different rails to each other. Everything else, like wires and connectors, matters less, so split away. Keep in mind, though, that wires do have limits on how much current they can carry before they heat up dangerously. (That’s why the Ultra asks for 12 wires, but that’s overkill.)

So, read your PSU’s manual to find out how it’s divided into rails and how much current each one can take. A 175 W Ultra needs ~10 A at 12 V through the connectors (plus another ~5 A it draws through the slot itself). And as a final sanity check, touch the wires to make sure they’re not getting warm.
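Rough arithmetic, taking the 175 W figure at face value: 175 W at 12 V is about 14.6 A per card in total; if roughly 5 A of that comes through the slot, the external connectors have to deliver the remaining ~115 W, i.e. about 10 A per card. So one 20 A rail covers roughly two cards’ worth of connectors at best, which is why balancing the load across rails matters.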

Thank you. I now have at least 3 cards running; for the fourth I need to do some handwork on the cooling …

By the way, the NVIDIA display driver utility does not work properly when 3 cards are enabled; it crashes … But I can compute on three cards in parallel, it’s fun!
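Even with the utility crashing, a quick enumeration is enough to confirm that CUDA itself sees all the cards; something along these lines (a trivial sketch, not tied to my setup):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA sees %d device(s)\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("  %d: %s, %.0f MB global memory, compute %d.%d\n",
               dev, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}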

I think what you need to do, j, is invest in some SLI bridges and a 30" Apple computation visualization unit. If you know what I mean.

I should try to get my company to invest in these items for my home machine. You know, for telecommuting…

Hi, I am also planning to set up a similar platform next month with 4 8800 Ultra cards, but a question I have now is:

Although the MSI P6N Diamond board has 4 PCI-E slots, only two of them are x16 (the board has 48 PCI-E lanes in total). If 4 8800 cards are installed, will the different link speeds of these 4 cards cause any problems?

Does your computer case have room to put an 8800 in the bottom-most slot of the P6N? That looks too close to the edge to hold a double-slot card.

With some handwork it can be done: you have to cut something out of the case and install additional fans, then it fits mechanically.

But I have a different problem: if I run my simulation (without communication or I/O over PCI-E) on two cards, it is fine, but on three cards I get a performance breakdown … even with the latest beta driver … The board switches to PCI-E x16, x16, x8 when three cards are installed and to x16, x8, x8, x8 for four cards. NVIDIA told me that there is a performance loss (due to the hardware, not CUDA) when the cards are not running in the same mode. I am now trying to switch all cards to x8 mode …

That is strange. I would expect an I/O performance drop from the switch to x8, but a drop in raw computation performance? How much does performance decrease? Has anyone else seen this in a three- or four-GPU setup?
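One way to check whether the x8 links themselves are the bottleneck would be to time a plain host-to-device copy on each card and compare: roughly half the bandwidth on the x8 slots would point at the links, similar numbers would point elsewhere. A minimal sketch of that test (the bandwidthTest sample in the CUDA SDK does the same thing more carefully):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 64 << 20;              // 64 MB per transfer
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);

        float *h_buf = NULL, *d_buf = NULL;
        cudaMallocHost((void **)&h_buf, bytes); // pinned host memory for a fair test
        cudaMalloc((void **)&d_buf, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("device %d: %.0f MB/s host->device\n",
               dev, (bytes / (1024.0 * 1024.0)) / (ms / 1000.0));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_buf);
        cudaFreeHost(h_buf);
    }
    return 0;
}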

Hi, somebody on the forum told me the solution:

you need one CPU core per GPU to run efficiently (because of host-thread synchronization).

I plugged in a quad-core and wow, it works …
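For anyone else setting this up, the pattern is simply one host thread per GPU, each with its own CUDA context, and ideally one free CPU core per thread. A rough sketch of what I mean (dummy_kernel and worker are placeholders, not my actual simulation):

#include <pthread.h>
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void dummy_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;                    // stand-in for the real simulation step
}

static void *worker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);                     // bind this host thread to exactly one GPU

    const int n = 1 << 20;
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    dummy_kernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();                // each thread only waits on its own GPU

    cudaFree(d_data);
    printf("device %d done\n", dev);
    return NULL;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count > 16) count = 16;

    pthread_t threads[16];
    int ids[16];
    for (int i = 0; i < count; ++i) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < count; ++i)
        pthread_join(threads[i], NULL);
    return 0;
}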

That explains why I haven’t seen a problem… I’m already using a quad-core in my prototyping setup. Thanks.