Hardware for GTX480 cards?

I’m going to pick up a number of GTX480s tomorrow, and I’m wondering what other hardware I should buy. There are plenty of motherboards with several PCIe x16 slots, so I’m basically wondering how many cards I should put in each computer to be cost-effective.
I’ve been using GTX295s until now (and will continue to do so for a while), so I know my basic hardware requirements (CPU, amount of RAM, hard drive), but I’m not sure what role the chipset plays in all of this, for example. Is there anything else I should keep in mind? Has anyone already built something around the 400-series who could recommend what (not) to buy?

Edit: Initially I had hoped to get more than 2 cards into one computer, but I realise that’s not practically possible because of the power consumption. (450W * 3 = 1350W)
Edit again: Never mind the above comment, I read the power consumption figure for the whole system, not the GPU. Thanks for clearing that up, guys.

The TDP of the GTX 480 is 250W, not 450W, so more than 2 cards should be possible.
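As a rough sanity check, here’s a back-of-the-envelope budget as a Python sketch. The 250 W TDP and the ~2/3 CUDA-load factor come from this thread; the ~200 W rest-of-system figure and 85% PSU efficiency are my own assumptions, so adjust them for your build:

```python
# Rough PSU budget for a multi-GPU CUDA rig.
# Assumptions: 250 W TDP per GTX 480, CUDA loads drawing roughly
# 2/3 of TDP (per this thread), ~200 W for CPU/drives/board, and
# ~85% PSU efficiency when converting wall watts to DC watts.

def psu_budget(n_gpus, gpu_tdp_w=250, cuda_factor=2/3,
               rest_of_system_w=200, psu_efficiency=0.85):
    gpu_draw = n_gpus * gpu_tdp_w * cuda_factor  # DC watts at the GPUs
    dc_total = gpu_draw + rest_of_system_w       # DC watts the PSU must supply
    wall_draw = dc_total / psu_efficiency        # AC watts at the outlet
    return dc_total, wall_draw

for n in (1, 2, 3):
    dc, wall = psu_budget(n)
    print(f"{n}x GTX 480 (CUDA): ~{dc:.0f} W DC, ~{wall:.0f} W at the wall")
```

With these assumptions, a 3-card rig lands around 700 W DC (~820 W at the wall), which lines up with the wall measurements reported later in this thread.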

Power-wise, the 295 and 480 are very similar… if you have a GTX295 rig, you can swap them out for 480s with no changes.

CUDA applications use significantly less GPU wattage than graphics applications; roughly 2/3 of the rated power is a fair estimate.

So we’re very lucky. A GTX480 uses up to 300 Watts… but that’s for GRAPHICS loads. In practice, for CUDA apps, it seems to be more like 200 watts.
I have noticed this on two different apps during 24/7 long runs. My 3x GTX295 system would draw only about 820 watts from the wall (and that includes PSU inefficiency, CPU, hard drives, etc!)

I am running 2xGTX480 and 1x GTX295 with an i7 980X CPU, and it’s still under 900 watts running CUDA apps only. If I fired up some games, I’m sure it’d push my 1200 Watt PSU.

It would be interesting if people compared their wall wattage under load and at idle, to see if this lighter power draw is fairly consistent across CUDA apps.

I am running 2xGTX480 and 1x GTX295 with an i7 980X CPU
What motherboard are you using?

P6T WS Revolution, so x16/x16/x16. But I also run some 3-card rigs with the cheap vanilla P6T or even the P6T SE… the third slot is only x4, but it doesn’t hurt my application.

I ended up with a P6T SE, a Core i7 920, and 3x2GB of RAM, running 3 GTX480s.

Thanks :)

Okay, say you build a rig with three GTX 480s using an 800 Watt power supply.

Does that mean you can start a fire by launching FurMark? ;)


I know you’re just joking but the actual answer is interesting.

If you started your 3xGTX480 800W machine, it’d probably boot normally and work OK when unloaded… idle power could be something like 600-700 watts, depending on your drives, CPU, and motherboard.

So what happens when you start Furmark and your actual usage goes to 1100W?

I suspect (but cannot say conclusively) that the PSU would start providing power and, within a second, fail to supply enough wattage, enough that the cards (and maybe the CPU) would crash. So you might get dumped back to the desktop, the display driver might reset, the machine might crash and reboot, or it might freeze.

Where it gets interesting is with a HIGHER-power PSU that’s just barely past its limits! Say a 1000W PSU when you’re drawing 1100W. I think THAT is when you worry about fire! Your load might be satisfied well enough that the cards keep functioning, so you go from an instant failure to a long-term torture-test scenario. That’s where some overheated part might fail… perhaps in the PSU, but maybe on the motherboard or in the CPU’s power circuitry.

Modern electronic power regulators (VRMs) are actually pretty good about handling too-low input voltages. For example, say your CPU is fed from a 5V rail; VRMs use switching to drop that to the ~1.5V the CPU actually uses. This is done by solid-state switches which, at high frequency, charge a capacitor up to 1.5V by connecting it to the 5V rail, breaking the connection once the cap reaches say 1.51V, and reconnecting once it’s dropped to 1.49V. This cycle happens so fast that the capacitor looks to the CPU like a constant 1.5V voltage source.
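The hysteretic switching described above can be sketched as a toy simulation. All of the numbers here (charge and drain rates, step counts) are illustrative, not real VRM parameters; the point is just the on/off band between 1.49V and 1.51V:

```python
# Toy simulation of a hysteretic ("bang-bang") regulator.
# The "fill" switch charges the output cap toward the input rail
# until it hits 1.51 V, then disconnects until load drains it to
# 1.49 V. All constants are illustrative, not real VRM values.

def simulate_vrm(v_in=5.0, v_high=1.51, v_low=1.49, steps=10000,
                 charge_rate=0.004, drain_rate=0.002):
    v = 1.50           # capacitor voltage
    switch_on = True   # "fill" switch state
    on_time = 0
    for _ in range(steps):
        if switch_on:
            v += charge_rate * (v_in - v)  # charge toward the input rail
            on_time += 1
            if v >= v_high:
                switch_on = False          # cap full enough: disconnect
        elif v <= v_low:
            switch_on = True               # drooped too far: reconnect
        v -= drain_rate                    # constant load drains the cap
    return v, on_time / steps              # final voltage, switch duty cycle

v5, duty5 = simulate_vrm(v_in=5.0)
v4, duty4 = simulate_vrm(v_in=4.0)
print(f"5V rail: output ~{v5:.2f} V, switch on {duty5:.0%} of the time")
print(f"4V rail: output ~{v4:.2f} V, switch on {duty4:.0%} of the time")
```

Note that with the sagging 4V input the output still hovers around 1.5V, but the switch has to stay on a larger fraction of the time, which is exactly the extra stress described next.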

This switching is also why you can change voltage in software. So even if the overloaded PSU’s output sags to 4V, the VRM can still adapt and provide 1.5V to the capacitor… it just needs to leave the “fill” switch turned on longer. That longer duration is what causes stress… more on-time means more CURRENT, and resistive heating goes as current squared. So you get more heating than normal in your wires, your power sockets, your VRMs, your motherboard traces, etc. In my example, where the supplied voltage dropped from 5V to 4V, you need 1.25 times the current, which means over 1.5X the heating.
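The arithmetic in that last sentence, as a two-line sketch (assuming power delivered stays constant, so current scales as the voltage ratio and I²R heating as its square):

```python
# Current and resistive-heating ratios when the input rail sags
# but the delivered power stays the same: I scales as V_nom/V_sag,
# and I^2 * R heating scales as the square of that.

def overload_heating(v_nominal, v_sag):
    current_ratio = v_nominal / v_sag
    heating_ratio = current_ratio ** 2
    return current_ratio, heating_ratio

i, h = overload_heating(5.0, 4.0)
print(f"current x{i:.2f}, resistive heating x{h:.2f}")
# → current x1.25, resistive heating x1.56
```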

So this is why it’s more physically dangerous to run at or just beyond the load limit… you get MORE heating from the extra current when the voltages have sagged below the 5V/12V spec than when they’re correct. So failure is more likely in those situations. This failure can be in almost any component of the PC… inside the PSU, at the socket connections, inside wires, inside capacitors.

This is likely what happened to my UPS this week.

TL;DR: a way-overloaded PSU will just crash; a slightly overloaded PSU is a fire/smoke/destruction risk, especially over a long period at overload.

Well, I’ve had a 350 Watt power supply do the “flash-bang-smoke” thing on me while playing Crysis on an NVIDIA 8800GT. It took the load for 20 minutes, then folded spectacularly.

I currently have a 750 Watt power supply causing spontaneous reboots with two GTX 260s and a GT 240 under gaming load (SLI configuration, GT 240 only for PhysX), whereas it works fine under CUDA load on all three. In theory the power supply should be just about able to handle it (I measured 480 Watts of peak usage for the entire system with a Kill A Watt type power meter).

So I am not entirely joking, I’ve seen smoke before. Now I am just seeing spontaneous reboots.


Sounds like the 350 Watt supply was actually providing more than 350 watts. That makes sense, since 350 watts is just the maximum safe load, so it could produce more power, but doing so would push more current than the (thin, cheap) wiring could handle, resulting in massive overheating. Being a cheap power supply, it likely had no protection circuit, and so it simply kept running until it caught fire.

The more expensive 750 apparently does have a protection circuit, which would be what’s causing the reboots: an internal sensor detects too high a temperature, or perhaps too high a load, and briefly cuts the power, protecting the supply. So it would be unlikely for such a power supply to actually catch fire, since the protection should shut it off first, barring some manufacturing defect.

Thanks for this post! So you think I’ll be safe picking up a GTX 470 to put in a Dell Precision with a 525W PSU, even though the card recommends 550W? I was afraid we’d be stuck using a GTX 260 in it.

Measure your current wattage with a plug-in Kill A Watt. That will tell you your power leeway. Assuming the Dell PSU reliably delivers 525W, you can see how many watts are left in your budget.
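That headroom check could be sketched like this. The measured 250 W wall draw and 85% PSU efficiency are hypothetical example numbers, and the 215 W TDP is an assumed figure for the new card (roughly a GTX 470); plug in your own Kill A Watt reading:

```python
# Sketch of a PSU headroom check before adding a GPU.
# Convert the measured wall draw to DC load (PSU efficiency is
# an assumed 85%), subtract from the PSU's rating, and compare
# the remainder against the new card's TDP.

def psu_headroom(psu_rating_w, measured_wall_w, new_gpu_tdp_w,
                 psu_efficiency=0.85):
    current_dc = measured_wall_w * psu_efficiency  # DC watts today
    remaining = psu_rating_w - current_dc          # watts left in budget
    return remaining, remaining >= new_gpu_tdp_w

# Hypothetical numbers: 525 W Dell PSU, 250 W measured at the wall,
# 215 W TDP for the new card.
headroom, fits = psu_headroom(psu_rating_w=525, measured_wall_w=250,
                              new_gpu_tdp_w=215)
print(f"~{headroom:.0f} W of headroom; new card fits: {fits}")
```

Note this is deliberately conservative in one direction (it compares against the card’s full TDP, not the lighter CUDA draw discussed above) and optimistic in another (it assumes the PSU’s 12V rail can actually deliver its full rating).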

But as always when adding a GPU to an existing system, check your motherboard for x16 PCIe support, and your CASE for both cooling and size constraints.