4x 9800 GX2 possible? Quad 9800 GX2 specification

Card #4 could always be relocated to slots 9 and 10 of the bigger case, using a PCIe riser plugged into PCIe slot #4, with the riser's PCB screwed to the case.

The only remaining problem is that cards #1, #2 and #3 will still sit directly next to one another.

There are several brands of very thin water block which can fit between the two daughter boards without increasing the width of the card. Koolance and Innovatek are two brands off the top of my head.

Koolance's and Innovatek's blocks have fittings on both sides of the block (with caps to plug the unused side). You can also remove the caps and fit rigid pipes instead, to connect adjacent blocks together.

Using SLI pipes, you could connect the first two boards in parallel and the last two boards in parallel, connect those two pairs in series, and put the (large-diameter, high-flow-rate!) water inlet and outlet at each end of the structure.

Aqua-Computer's AquagraFX blocks even have their holes on the side, giving much more freedom in how you connect them.

To keep water-cooling costs down, you could water-cool only the two cards that can't be cooled with the stock cooler and leave the others on air cooling.

To stay with air cooling: the top two cards have their air intakes obstructed by the next card. It might be possible to remove the radial fan while keeping the original heatsink, and to mount a couple of ordinary fans that draw air in from the sides instead (making sure the airflow is sufficient, perhaps taping between the cards and the fans so that all the air is forced through the heatsinks). In fact, from the pictures I've seen on the web, it seems that the cards inside a Tesla only have heatsinks and rely on the airflow forced through by the fans on each side of the box.

In any case, proper benchmarking and temperature monitoring should be done.

Following up on DrYak's suggestions, I sent emails to various manufacturers of water blocks for the 9800 GX2, asking whether they will fit side by side on double-spaced (1.60" center-to-center) PCIe slots.

Replies, so far:

Koolance: YES
Aqua-Computer: YES
Innovatek: YES
DangerDen: NO

Waiting for replies from EK and EVGA. I think EVGA is going to use Innovatek blocks so they are probably a YES.

With cards packed this tight I would be reluctant to try air cooling. These cards dissipate about 270 watts each!
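
(Back-of-the-envelope, just to put numbers on that: four cards at roughly 270 W each is about 1080 W for the GPUs alone, before the CPU, drives and PSU inefficiency are counted, which is why the 1500 W supply mentioned further down is not overkill.)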

Skippy

Just came across an interesting post on Tom’s Hardware relevant to this discussion:

"four 9800GX2 cards: will it work?" | Tom's Hardware Forum: http://www.tomshardware.com/forum/249450-3...0gx2-cards-work

He has a photo of a pile of parts: four 9800 GX2 cards, a 1500 W PSU, an MSI K9A2 motherboard, and a Lian Li case. He promises to post results when he gets it assembled. He is planning on running CUDA. I didn't see a fire extinguisher in the photo. Looking forward to seeing how he does.

Skippy

I think the water vs. air question is a good point. For me, water is out of the question because the machine will be in a shared, air-conditioned machine room with lots of other bits of kit (such as an 8-CPU, 16-core, 32 GB RAM server). From what I have seen, the 9800 GX2 takes most of its air in through the sides of the board at one end, channels it down the middle, and vents it out the back of the machine. The biggest problem is therefore putting two of them side by side, since they would block half of each other's intakes. In a 10-slot case, eight slots would be taken up, so two slots can be left empty as spacing.

This is one possible layout, excuse the ASCII art …

*   PCI-E socket
A
|   9800GX2
V
>>> PCI-E riser

+-----------+
|           |
|           |
|           |
|           |
+-----+     |
|#####|     | * A
+-----+     |   |
|#####|     |   |
+-----+     |   V
|#####|     | * A
+-----+     |   |
|#####|     |   |
+-----+     |   V
|     |     | * >>>>>>+
+-----+     |         |
|#####|     | * A     |
+-----+     |   |     |
|#####|     |   |     |
+-----+     |   V     |
|     |     |         |
+-----+     |         |
|#####|     |   A <<<<+
+-----+     |   |
|#####|     |   |
+-----+     |   V
+-----------+

There are other ways to arrange things; the optimum would depend on the exact case and on the motherboard, CPU fan, and case fan arrangement.

I think these cards are designed to operate at pretty high temperatures in normal use. My current Tesla card sits at 69 °C when idle, and can hit 89 °C under heavy use (it is at the very bottom of the case, so airflow is not great). The default for the overheating alarm is 110 °C :o There have already been reviews, and people gaming, with two adjacent 9800 GX2s in quad-SLI (the SLI bridge is not long enough to go further), so two adjacent cards should be okay.

Oh, by the way, when I say PCI-E riser card, I mean one of the ones with a length of cable between the two ends (I'm looking at about 8-10" or so, to be safe), rather than a rigid riser. That means the card does not have to sit on the case directly where the PCI-E socket is on the motherboard.

The EVGA 132-CK-NF79-A1 looks like it can easily handle a three-card double-width setup. You might even be able to stick in a slender PCI-E x1 video card so you won't have to worry about confusing the poor OS with six identical video cards, only one of which you plan to use for video. You would be stuck with only one quad-core CPU, but if your problem is light enough on host-side work that the bus won't choke, the CPU probably won't either.

Although passive backplanes with multiple x16 slots exist, they aren't designed to provide multiple full PCI-E x16 slots per processor, since they are aimed at "business needs." I tried looking for an all-x16 passive backplane for use with an array of Tesla cards, but found there is no such beastie. For the money, a 4U rackmount (because the 1200 W power supply can't fit in 3U) set of triple-GX2 boxes on Fibre Channel is the best cluster formation at this time.

I checked the Intel processor compatibility list (http://processormatch.intel.com/CompDB/SearchResult.aspx?Boardname=d5400xs) about a month ago, and the E5405 Xeon was on the list.

I checked the list again yesterday; I wanted to know if the L5420 and L5410 were supported. The current list has REMOVED six CPUs: E5450, E5440, E5430, E5420, E5410 and E5405. Basically, all the Xeons that retail for less than US$1000 each have been taken off the list.

Anybody know what gives here?

Skippy

Looks like someone has gone ahead and actually put one together.
Here's the guy's website for what he's built; it looks very interesting:
http://fastra.ua.ac.be/en/index.html

Yes, and it looks like it was built by these guys:
http://www.tones.be/

Just had a peek, and saw that they are also building a “Skulltrail” PC.

/Lars

Hmm, good to know someone else has had success with it. I'm surprised that it is sufficiently cooled with 4 adjacent cards, even if they do have to keep the side panel off to keep it under 100 °C! I'd be interested to know how much noise it makes, and how much power it consumes.

I've obtained some of the hardware, and hopefully by the end of next week I should have most of the rest. Will post something here when it all arrives.

After looking at some pictures: aren't the GX2s using radial fans, with a circular intake on both sides of the card? Radial fans can develop a lot of pressure (high air flow).

If that's the case, the whole stack behaves as if it were one big cylindrical radial fan, and still manages to push quite a bit of air across the cards. It's going to be noisy, but still effective.

You could even imagine finding a way to force more air in at both ends of the "virtual big cylindrical radial fan" (fan units taken from an old broken air-dryer, without the heaters, of course).

And put a really big radial fan next to where the cards attach to the case (the Armorsuit has a grid there for attaching such fans).

Another solution would be to fit extra fans on the side panel (the CoolerMaster Stacker 832 has a 2x2 grid of 12 cm fans on its side panel; a similar system could be envisioned to cool the cards down).

All these solutions would be much cheaper than water cooling.

Last but not least, you're not technically required to have the system on your desktop. Instead of going for Windows XP 64 and running the development tools (Visual Studio) on that machine, you can go for a bare minimal Linux installation (without X installed), run the server headless with the GeForces doing 100% CUDA, maybe attach some old PCI Matrox card on a riser to get a console, put the whole thing in an air-conditioned server room, and access it over SSH.

If the ambient air is cooler, you're bound to get lower temperatures inside the machine as well.
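
As a quick sanity check for such a headless box (my own sketch, not from this thread; it just uses the standard CUDA runtime calls), something like the following can be compiled with nvcc and run over SSH to confirm that all eight GPUs of a quad GX2 setup are visible to CUDA:

// devcheck.cu - list the CUDA devices the runtime can see on a headless box.
// Build: nvcc -o devcheck devcheck.cu
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices found: %d\n", count);   // expect 8 with four GX2 cards

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("  %d: %s, %d MB, compute %d.%d\n",
               d, prop.name, (int)(prop.totalGlobalMem >> 20),
               prop.major, prop.minor);
    }
    return 0;
}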

… or use the wonderful NX remote desktop system. If VNC is a bitmap picture, NX is like a vector drawing. It’s bizarrely fast.

http://www.nomachine.com/

NX is amazing, I have to agree. Full screen X sessions over poor-quality DSL really blew me away. I use it to do most of my CUDA work these days.

Their licensing costs are rather steep, but the free 2-simultaneous user server is great. The FreeNX server (not by nomachine) tends to be a buggy, fragile mess, but if you can get it to work, it is free for as many simultaneous users as you want. :)

Anyone have any luck running CUDA apps on one of these systems?

http://www.engadget.com/2008/05/29/researc…-9800-gx2-card/

NX is mighty cool. BUT: I've seen performance degrade significantly when NX'ing into my devel box directly. SSH'ing to localhost again solved all the performance issues. This only happened with CUDA; my GL/Cg code did not show this behavior.

Good to know - would you consider starting a new thread with this info? I’m anticipating a lot of NX usage myself, so … Thanks!

Ah, this is interesting. I use NX to connect to the head node of our cluster, but then SSH to the compute node with our CUDA devices. So I don’t run NX directly on the compute node, which may be why I haven’t seen this problem.

Hi, you wrote on the first page that you need 8 CPU cores for four 9800 GX2s (8 GPUs)!?
I think that's wrong; here is an example of a system with 8 GPUs and one AMD CPU with 4 cores (AMD Phenom 9850).
The mainboard here is an MSI K9A2 Platinum with one AM2+ socket!

http://fastra.ua.ac.be/en/index.html

My system has the same case; it's perfect for these big builds (Lian Li PC-80), with the AMD 9850 Black Edition, 4x 9800 GTX, and 4 GB of Corsair PC-1066 RAM.

xio

Nothing prevents you from building even a single-core system with 8 GPUs. The problem is that CUDA busy-waits for operations to complete on each GPU. If you have 4 cores with 8 host threads busy-waiting, there can be severe performance penalties. Some users have reported ~50% performance with a single-core, dual-GPU setup.

Yup. I believe you could code around this for specific applications if you manage it all yourself, and it depends a lot on what you're actually doing, too.
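
To make that point concrete, here is a minimal sketch (my own, not from this thread; the kernel and sizes are placeholders) of the one-host-thread-per-GPU pattern, where each thread polls a CUDA event with a short sleep instead of calling cudaThreadSynchronize, so eight GPU-handling threads do not pin four cores at 100%:

// multigpu.cu - one host thread per GPU, polling instead of busy-waiting.
// Build: nvcc -o multigpu multigpu.cu -lpthread
#include <cstdio>
#include <pthread.h>
#include <unistd.h>
#include <cuda_runtime.h>

__global__ void busywork(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 2.0f + 1.0f;          // placeholder computation
}

static void *worker(void *arg)
{
    int dev = (int)(long)arg;
    cudaSetDevice(dev);                           // bind this host thread to one GPU

    const int n = 1 << 20;
    float *d_data = 0;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    busywork<<<(n + 255) / 256, 256>>>(d_data, n);

    // Record an event and poll it ourselves, sleeping between polls,
    // so the CPU core stays available for the other GPU-handling threads.
    cudaEvent_t done;
    cudaEventCreate(&done);
    cudaEventRecord(done, 0);
    while (cudaEventQuery(done) == cudaErrorNotReady)
        usleep(100);

    printf("device %d finished\n", dev);
    cudaEventDestroy(done);
    cudaFree(d_data);
    return 0;
}

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    pthread_t threads[16];
    for (int d = 0; d < count && d < 16; ++d)
        pthread_create(&threads[d], 0, worker, (void *)(long)d);
    for (int d = 0; d < count && d < 16; ++d)
        pthread_join(threads[d], 0);
    return 0;
}

Whether this helps depends on how long each kernel runs; for very short launches the polling granularity can cost more than it saves.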