Hardware questions - experienced programmer, newbie to GPUs, seeking information sources.

Greetings,

New to the forum - I’m a physicist, just finishing my PhD, doing simulations of granular materials. As a graduation present for myself, I’m planning on getting a Linux box with some sort of CUDA enabled GPU, to enable me to do more simulations without any strings attached. I’m currently using a cluster at ANL, but won’t have access to it once I’m done (unless I get a post-doc there.)

I’ve done some basic research, but it seems like there’s a slew of choices. At one end, there’s the Tesla line: very high performance dedicated computational cards that are out of my price range (although some C1060s may hit the market once the C2050 and C2070 are available, right around my prospective purchasing time, and may drop to a price where I could consider them). At the other end, one can get CUDA-enabled cards like the 9800 series or the GTX 295 and its siblings for a few hundred dollars, but they don’t have the horsepower that the big boys do. And there’s the Quadro line in the middle. Then I’d need a box to put them in, which concerns me a little (while I’ve done experimental work, I’m also a Mac guy and haven’t ever built a PC). I’m thinking of running headless on my local network, so most other hardware is not important (I think, …) Or am I just out of my mind trying this?

Anyway, right now I’m looking for recommendations on where to continue my research for this purchase. I plan on writing some new C code to do the simulations, and don’t want to use Windows. Mac would be OK, but would probably raise the price a lot, so I’m looking at Linux first.

Replies here in the forum, please - I’ll check it every few days. Thanks in advance.

Regards,
aeronaut

Depending on your definition of “horsepower”, the GeForce GTX 275/285/295 consumer cards actually have more than the current Tesla 10-series, not less. They are a bit faster than the Tesla versions (in both memory and shader clocks), but have less memory. I use self-built “whiteboxes” with GeForce GTX 275/285/295 cards running Linux for all my simulation work, both in a cluster and in standalone workstation-class machines. It is by far the most cost-effective way to do GPU computing.

The workstation-class system I do most of my development work on uses a pair of stock GTX 275s sitting in an AMD 790FX chipset motherboard with a single quad-core AMD Phenom 945 and 8GB of DDR-1333 RAM. Comparing prices from region to region can be hard (I am in Europe), but I would guess it works out somewhere in the $1300 range in parts from NewEgg or similar. It took me about 90 minutes to get from delivery of the parts to the first finished finite element simulation, and I don’t consider myself all that expert.

Thanks, avidday, that’s exactly the kind of info I need.

A few follow-ups:

  1. Do you have any problems using more than one of the GTX cards in a machine? Does the CUDA code recognize and use them all?

  2. There seem to be several flavors of GTX cards out there. Does one work better than another, is one more reliable, and/or run cooler, etc.?

  3. Why do you prefer the 275 when the 285 seems to have better stats? Is it price/bang for the $$$?

  4. What about the GTX295? Does the second graphics chip and slightly slower everything else justify the price increase?

Regards,
Aeronaut

No problems whatsoever. I can use both GPUs for multi-gpu apps, or use one as a dedicated display/rendering card and the other for CUDA, or use the non-display card for interactive source debugging with cuda-gdb. All worked straight out of the box.
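(For the original poster: here is a minimal sketch, my own illustration rather than code from this thread, of how a CUDA program sees multiple cards. It just enumerates whatever devices the runtime reports and pins the process to one of them, e.g. leaving device 0 free for the display.)

[code]
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  %d: %s, %d multiprocessors, %.0f MB\n", i, prop.name,
               prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }

    /* Use the last device for compute, leaving device 0 for the display. */
    if (count > 1)
        cudaSetDevice(count - 1);

    return 0;
}
[/code]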

I don’t really have an opinion about that except to say that I have always chosen non-overclocked models based on the NVIDIA reference design. CUDA is pretty good at exposing hardware faults, so my strategy has been to be a little conservative. So far it has worked out OK. I have more than a dozen reference design, stock clock GTX 200 gpus from several different manufacturers in daily use as CUDA compute devices, and they all are working fine.

For the development box, it was the cheapest way to get 240 cores, and the 1792MB versions are the cheapest way to get more than 896MB of memory per GPU. No other reason.

I don’t have any experience with GTX 295s. A lot of people who work on “embarrassingly parallel” problems favour them because they give the highest GPU density for a given number of motherboard PCI Express slots. My work doesn’t fit into that category, and I prefer more memory per core than the 896MB you get on the GTX 295.

  1. Do you have any problems using more than one of the GTX cards in a machine? Does the CUDA code recognize and use them all?
    I do, unfortunately, but I’m assuming that’s a programming error. When one application finishes (and starts freeing memory), another application segfaults if I run several in parallel. (See the error-checking sketch after this list.)

  2. There seem to be several flavors of GTX cards out there. Does one work better than another, is one more reliable, and/or run cooler, etc.?
    Haven’t formed an opinion here, but avidday’s reasoning seems good.

  3. Why do you prefer the 275 when the 285 seems to have better stats? Is it price/bang for the $$$?

  4. What about the GTX295? Does the second graphics chip and slightly slower everything else justify the price increase?
    We have preferred the GTX295 for the high “compute density” it offers. Assuming stability issues are worked out, we’ll be considering building one of these: [url=“http://fastra2.ua.ac.be/”]http://fastra2.ua.ac.be/[/url]
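Regarding point 1: a minimal sketch (again my own illustration, not code from this thread) of the kind of error checking that usually flushes out that sort of problem - every runtime call reports its status immediately instead of failing silently a few calls later. The buffer size and device index are arbitrary placeholders.

[code]
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Abort with file/line and the CUDA error string if a runtime call fails. */
#define CUDA_CHECK(call)                                          \
    do {                                                          \
        cudaError_t err = (call);                                 \
        if (err != cudaSuccess) {                                 \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,    \
                    cudaGetErrorString(err));                     \
            exit(EXIT_FAILURE);                                   \
        }                                                         \
    } while (0)

int main(void)
{
    float *d_buf = NULL;

    CUDA_CHECK(cudaSetDevice(0));                       /* pin this process to one GPU */
    CUDA_CHECK(cudaMalloc((void **)&d_buf, 64 << 20));  /* 64MB scratch buffer */

    /* ... kernel launches would go here; check their status afterwards ... */
    CUDA_CHECK(cudaGetLastError());

    CUDA_CHECK(cudaFree(d_buf));
    return 0;
}
[/code]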

On a slightly different topic:
If I were to start today instead of a year ago, I’d give OpenCL a serious look (as also noted on the FASTRA site).

Edit: [url="http://forums.nvidia.com/index.php?showtopic=106266&hl="]http://forums.nvidia.com/index.php?showtopic=106266&hl=[/url]