Shopping list for a CUDA GPGPU system in the 800-1000 euro price range. Goal: a 'budget' GTX 470 system.

I am an AI student in search of a complete parts list to build a computer for serious GPGPU work. I have yet to find such a list, hence this topic. The goal is to produce a complete parts list (not vague general advice) that results in a fully functioning CUDA computer.

The constraints are:

    • Total cost between 800 and 1000 euros

    • Possibility to upgrade the system in the future

=== UPDATED ===

Combining the information in this thread, I now arrive at the following parts list. This system should be upgradable to a dual-card setup, whenever the owner decides it is needed, without having to change anything else. It was also noted that the price of the GTX 470 is expected to drop in a month or so, once suppliers are no longer badly understocked, so it is advisable to wait a while before buying this system.

Total price: € 935.62 (in the Netherlands; it should be considerably less in the US)

Graphics:

GTX 470 – ASUS ENGTX470/2DI/1280MD5

€ 318,16

(Choosing the GTX 470 over the GTX 480: the 480 is about 25% faster but 50% more expensive.)

Motherboard:

MSI 790FX-GD70

€ 145,16

(The 790FX chipset provides a bus bandwidth of up to 6 GB/s, which is enough even for transfer-rate-sensitive algorithms.)

http://azerty.nl/0-1126-204551/msi-790fx-g…oederbord-.html

Processor:

AMD Phenom II X2 3.2 GHz (Dual Core)

€ 86,95

(This is a good processor, and it doesn't need to be very fast, as this system is all about the GPU.)

http://azerty.nl/0-1127-257988/amd-black-e…processor-.html

Power Supply:

Antec TruePower Quattro 850W

€ 129,83

(A big power supply, to make upgrading to a dual-card setup possible.)

http://azerty.nl/0-1073-34843/antec-truepo…attro-850-.html

Memory:

OCZ Gold - 4 GB : 2 x 2 GB - DIMM 240-pins - DDR3 - 1066 MHz / PC3-8500

€ 99,59

(4 GB of memory should be enough, and it is easily upgradable to 16 GB if needed.)

Case:

Cooler Master HAF 922

€ 84,63

http://azerty.nl/8-1044-216531/cooler-mast…?tab=tech_specs

(I just read a review in which a high-end dual GTX 480 gaming system was built in this case, so it should have plenty of room for all components.)

Hard disk:

Samsung SpinPoint F1 Desktop Class HD103UJ 1TB

€ 71,30

(Added a hard disk somewhat arbitrarily, just to make the system complete.)

If you have any suggestions, see any glaring mistakes or just feel like commenting on this system, feel free to do so.

  • Marijn

If you are seriously considering going to two GTX 470s in the future, I would not go with a Socket 1156 CPU/motherboard. That socket technology passes just one PCI-Express 2.0 x16 link directly from the CPU to the motherboard, and another, much slower link for the other peripherals. If I’m reading the MSI manual right, the motherboard you point out only has an x4 electrical connection to the second x16 slot, cutting your bandwidth to that second card by a factor of 4.

Your best bet for a future dual card setup is a socket 1366 system, which has a decent QPI link to the X58 chipset, with 36 lanes of PCI-Express fanning out. I don’t know if you can fit such a system into your budget, but I would definitely recommend it in the dual card scenario.

I would add that if a Socket 1366 system is out of your budget (and it probably is) and you are serious about multi-GPU expansion, a Socket AM3 system with the 790FX/890FX chipset will get you a comparable number of PCI-e lanes, a similar core count, and close to the same per-core performance for considerably less money.

It looks like that motherboard only supports x16/x4 or x8/x8/x4 PCIe x16 configurations. So, while you could put a second card in, you won't be able to feed it with a full x16 link. If you think you might add that second card, I would look for something that supports a full x16/x16 configuration.
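If you want to sanity-check what a given slot actually delivers once the machine is built, a quick copy-timing program works well. Below is a minimal sketch (the file name and buffer size are just placeholders, and it assumes a working CUDA toolkit); with pinned memory, a PCIe 2.0 x16 slot should land somewhere around 5-6 GB/s, while an x4 electrical slot should come in at roughly a quarter of that.

// bw_check.cu - rough host-to-device bandwidth check (compile with: nvcc bw_check.cu -o bw_check)
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 64 << 20;            // 64 MB test buffer
    void *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);            // pinned host memory for peak transfer rate
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);   // warm-up copy

    cudaEventRecord(start, 0);
    for (int i = 0; i < 10; ++i)
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host-to-device bandwidth: %.2f GB/s\n",
           10.0 * bytes / (ms / 1000.0) / 1e9);

    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}

Run it once with the card in each slot you care about and compare the numbers.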

Personally, I would find a motherboard that uses an nVidia chipset just to avoid possible interop problems. I’m not sure, but a mobo with integrated video might enable you to improve performance by driving the desktop display off the mobo graphics chip and freeing up the card for work.

There’s no really critical reason to use an NV chipset on development workstations, especially since that would mean a Core 2 or older Phenom (or are they using the same socket? I don’t even know). Regardless, X58/P55/790FX should be fine.

Agreed. I have both 790FX and X58 systems running CUDA apps, and the 790FX works very well, only falling behind the X58 slightly in PCI-Express bandwidth.

It is probably worth adding that a number of AM3/AM2+/AM2 motherboards come with more than one PCIe 2.0 x16 slot; however, only those with a 790FX, 890GX or 980a chipset will support x16/x16 PCIe lanes.

Wow, thanks for your replies! You guys really know your stuff. I knew building this system wouldn't be that simple.

Based on this information I conclude that it would be infeasible to make the system easily upgradable to a dual-GPU setup. So the upgradability, if that is a word, will have to come from selling the graphics card in the future and replacing it with whatever is fastest at that time.

Are there any important features that my motherboard needs to have in the single-GPU setup? Things like: if the motherboard lacks feature X, or uses type Y of working memory, it will become a bottleneck in the system?

  • Marijn

I think you can still allow for a multi-card setup on a budget by taking the AM3 route. Checking on Azerty (I don't know any Dutch, but most of the site is in English):

MSI 790FX-GD70

http://azerty.nl/0-1126-204551/msi-790fx-g…oederbord-.html

€ 145,16

AMD Phenom II X2 3.2 GHz (Dual Core)

http://azerty.nl/0-1127-257988/amd-black-e…processor-.html

€ 86,95

So that's ~35 euros more than the CPU/motherboard combo you originally selected. You would also need a slightly better power supply:

Antec TruePower Quattro 850W

http://azerty.nl/0-1073-34843/antec-truepo…attro-850-.html

€ 129,83

But that setup, with the other parts you selected, should be expandable to a second GPU with no problem.

Thanks, great suggestions! I hadn't even looked at AMD processors!

I do have a question about the power supply. I have seen videos of the card drawing as much as 430 watts, and threads that advise using a (considerably more expensive) 1.2 kW supply for a dual-card setup. That suggests an 850 W supply is insufficient. Do you (or anyone else) have a suggestion on how to resolve that?

You must have been seeing a measurement of the whole computer. A card powered by two 6-pin PCI-Express connectors can draw at most 225 W (75 W from the slot plus 75 W per 6-pin connector); a card with a 6-pin and an 8-pin connector has a maximum draw of 300 W, since an 8-pin connector supplies 150 W. I've run two GT200 cards on that Antec supply with no problems.

I run a pair of GTX 275s with a 95 W TDP Phenom II and a bunch of fans off a Corsair 750 W supply without a problem. That Antec should be good for a pair of GTX 470s. For the GTX 480, which will probably pull over 250 W per card, you will need a supply with two 6-pin PCI-e and two 8-pin PCI-e power connectors, preferably all on different rails. You might need to go up a little in power for that.

That is reassuring. The discussion suggesting a 1.2 kW supply concerned running two possibly overclocked GTX 480s, which would explain the extreme power supply demands.

The last thing that concerns me is choosing working memory. I can imagine that choosing the wrong type of memory could become a bottleneck when processing large amounts of data, e.g. when the lack of a certain motherboard feature limits how well system memory and GPU memory work together. Is this a valid concern?

To be honest, I think some of the people in that discussion might have a "drag racing" mentality about their computer parts selection. :) I use a 1.25 kW power supply to drive four GTX 295 cards, each of which uses at least as much power as a GTX 480.

It’s not going to be a big deal in this case. Just make sure to install your DDR3 modules so you populate both memory channels. No need to fuss with overclocked or fancy RAM. The PCI-Express bus is the primary bottleneck in host-to-device and device-to-host transfers.
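To make that concrete, here is a small sketch (file and variable names are made up for illustration; it assumes the CUDA runtime API) comparing transfers from ordinary pageable host memory against pinned memory allocated with cudaMallocHost. The pinned allocation is the one host-side choice that really moves the needle; exotic RAM will not.

// pinned_vs_pageable.cu - compares host-to-device transfer rates for pageable and pinned memory
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

static float copy_time_ms(void *dst, const void *src, size_t bytes, int reps)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    for (int i = 0; i < reps; ++i)
        cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float ms;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main()
{
    const size_t bytes = 64 << 20;
    const int reps = 10;
    void *d_buf, *h_pageable, *h_pinned;
    cudaMalloc(&d_buf, bytes);
    h_pageable = malloc(bytes);               // ordinary pageable allocation
    cudaMallocHost(&h_pinned, bytes);         // page-locked (pinned) allocation

    double scale = (double)reps * bytes / 1e6; // converts elapsed ms into GB/s
    printf("pageable: %.2f GB/s\n", scale / copy_time_ms(d_buf, h_pageable, bytes, reps));
    printf("pinned:   %.2f GB/s\n", scale / copy_time_ms(d_buf, h_pinned, bytes, reps));

    free(h_pageable);
    cudaFreeHost(h_pinned);
    cudaFree(d_buf);
    return 0;
}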

Well, the chipset/bus becomes the bottleneck for concurrent bandwidth as soon as you have two x16 cards in use.

True, though once you’ve settled on AM3, there’s not much more you can do to improve simultaneous transfers to both cards. Playing with RAM clock rates will be a 20% effect, at best. The best solution is to go X58 with triple channel, but that’s out of the price range. :)

It's QPI that's the problem, not memory bandwidth.

And I populate both memory channels by putting the two memory modules in the right slots, I'm assuming?

Also, I see that double precision performance on the GTX 470 is artificially limited. This does not directly pose big problems for my AI algorithms (I am training neural networks, not simulating water), but I still find it disturbing.

Does this mean that the GTX 470 is a really bad choice if for some reason I need to do double precision calculations? Or is it still a cost-effective card (e.g. in double precision throughput per price)?

No, it's great for DP. A $350 GTX 285 has a DP throughput of 88 GFLOPS, and the $350 GTX 470 has a DP throughput of 138 GFLOPS.

What people are wishing for is that they could get 500 DP GFLOPS at the same gaming-card price, but to get that you need to buy the much more expensive Tesla, which uses chips validated and guaranteed to run at the full 4x DP rate. The C2050 card gives 520 DP GFLOPS for $2500.
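If you want to see roughly where your own card lands, a crude microbenchmark like the sketch below can give a ballpark figure (this is not an official benchmark; the file name, launch configuration and iteration count are arbitrary, and it needs -arch=sm_13 or higher so the compiler actually emits double-precision instructions).

// dp_flops.cu - crude double-precision multiply-add throughput estimate
// Build with: nvcc -arch=sm_20 dp_flops.cu -o dp_flops
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dp_madd(double *out, int iters)
{
    double a = 1.0 + threadIdx.x * 1e-7;
    double b = 0.5;
    for (int i = 0; i < iters; ++i)
        a = a * b + 0.25;                     // one multiply + one add per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;  // keeps the loop from being optimized away
}

int main()
{
    const int blocks = 120, threads = 256, iters = 100000;
    double *d_out;
    cudaMalloc((void **)&d_out, blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    dp_madd<<<blocks, threads>>>(d_out, iters);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * blocks * threads * (double)iters;   // 2 flops per loop iteration
    printf("~%.1f DP GFLOPS\n", flops / (ms / 1000.0) / 1e9);

    cudaFree(d_out);
    return 0;
}

Because each thread runs a dependent chain, the number it reports is a conservative sustained figure rather than a theoretical peak, but it is enough to compare cards.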

Like Tim said, it isn't memory bandwidth that is the bottleneck; it is the HT link from the PCI-e controller to the CPU (and, by extension, memory). A lot of "enthusiast" AMD boards support HT link overclocking. I tried it, and it really did boost the multi-GPU copy performance I was posting about a while back.
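For anyone wanting to check whether the chipset/HT link is really the limiter on their own board, here is a rough sketch (file name and sizes are arbitrary; it assumes two CUDA devices and pinned host buffers) that launches asynchronous host-to-device copies to two cards at once and reports the aggregate rate. Compare that figure against twice the single-card number from a one-card test: if it falls well short, the shared link is the bottleneck.

// dual_copy.cu - rough check of concurrent host-to-device bandwidth with two GPUs
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev < 2) { printf("need two CUDA devices for this test\n"); return 1; }

    const size_t bytes = 128 << 20;
    void *h_buf[2], *d_buf[2];
    cudaStream_t stream[2];

    // one pinned host buffer, device buffer, and stream per card
    for (int d = 0; d < 2; ++d) {
        cudaSetDevice(d);
        cudaMallocHost(&h_buf[d], bytes);
        cudaMalloc(&d_buf[d], bytes);
        cudaStreamCreate(&stream[d]);
    }

    // time copies to both cards issued (nearly) simultaneously, using events on device 0
    cudaEvent_t start, stop;
    cudaSetDevice(0);
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    for (int i = 0; i < 10; ++i)
        for (int d = 0; d < 2; ++d) {
            cudaSetDevice(d);
            cudaMemcpyAsync(d_buf[d], h_buf[d], bytes, cudaMemcpyHostToDevice, stream[d]);
        }
    for (int d = 0; d < 2; ++d) { cudaSetDevice(d); cudaStreamSynchronize(stream[d]); }
    cudaSetDevice(0);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms;
    cudaEventElapsedTime(&ms, start, stop);
    printf("aggregate host-to-device bandwidth: %.2f GB/s\n",
           2.0 * 10.0 * bytes / (ms / 1000.0) / 1e9);

    for (int d = 0; d < 2; ++d) {
        cudaSetDevice(d);
        cudaFreeHost(h_buf[d]);
        cudaFree(d_buf[d]);
        cudaStreamDestroy(stream[d]);
    }
    return 0;
}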