Hello. I need a high-performance system for some kind of calculations. Initially, I wanted to build a cluster from ordinary PCs. By chance I stumbled on CUDA technology, and I decided to try it. First, I want to test my algorithms on a GPU, find the critical places, and analyze the cost/performance ratio. Which card should I choose for the testing period? I plan to spend around $300-400 on a video card. Should I pick the card with the maximum number of CUDA cores? And what are the differences between GeForce and Quadro models (I know that Quadro is positioned as the professional line, but what are the real performance differences)?
Quadro FX cards are equipped with heavier rendering capabilities than the GTX 280, so they are much more capable for high-end video editing applications.
Some of the features that Quadro FX cards have are:
- 128-bit color precision
- Unlimited fragment instructions
- Unlimited vertex instructions
- 3D volumetric texture support
- 12 pixels per clock rendering engine
- Hardware accelerated antialiased points and lines
- Hardware OpenGL overlay planes
- Hardware accelerated two-sided lighting
- Hardware accelerated clipping planes
- 3rd generation occlusion culling
- 16 textures per pixel in fragment programs
- Window ID clipping functionality
- Hardware accelerated line stippling
(originally posted by Meodowla)
Depends on a number of factors. Do you need double precision? If yes, you need at least a card with compute capability 1.3, i.e. GTX 260 upwards (but not the relabeled crap of the 300 series). If you do heavy double precision arithmetic, you might want to wait for the Fermi-based Teslas (although you will likely want to start toying with some other card in the meantime, as Nvidia hasn't even set a release date yet).
If you need upwards of a gigabyte of memory per device, you need to head for Tesla. You might also want to do that for improved reliability (by the way, never use an overclocked card for Cuda).
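A quick way to check what a device offers before committing is to query its properties. Here's a minimal sketch of mine using the runtime API (device 0 is hard-coded just for brevity):

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0
    printf("%s: compute capability %d.%d, %.0f MB global memory\n",
           prop.name, prop.major, prop.minor,
           prop.totalGlobalMem / (1024.0 * 1024.0));
    if (prop.major == 1 && prop.minor < 3)
        printf("No hardware double precision on this device.\n");
    return 0;
}
```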
Speed will either be roughly proportional to the number of cores times the shader frequency if your kernels are compute bound, or about proportional to the memory bandwidth (which is proportional to the bus width times the memory clock) if they are memory bound.
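To put rough numbers on that: a GTX 285 has a 512-bit bus and a 1242 MHz memory clock (2484 MT/s effective GDDR3), so its theoretical bandwidth is 512/8 bytes × 2.484 GT/s ≈ 159 GB/s. Here's a quick sketch of mine to see how close a streaming kernel gets in practice (buffer sizes are arbitrary choices):

```
#include <cstdio>
#include <cuda_runtime.h>

// Streaming copy: one read and one write per element, so the kernel is
// memory bound and its runtime reflects achievable bandwidth.
__global__ void copyKernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

int main() {
    const int n = 1 << 24;               // 16M floats = 64 MB per buffer
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    copyKernel<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Bytes moved: n floats read plus n floats written.
    printf("Effective bandwidth: %.1f GB/s\n",
           2.0 * n * sizeof(float) / (ms / 1000.0) / 1e9);
    return 0;
}
```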
To familiarize yourself with Cuda and assess the achievable speed, I'd generally recommend a card in the GTX 260 … GTX 295 range (i.e., compute capability 1.3). If you are lucky enough to get hold of a GTX 470, you might want to try Fermi's new features as well (though I doubt you will be able to set up a cluster based on those at the moment).
I suggest a GTX 260 Core 216 or a GTX 285. The second one has better performance but is naturally more expensive. Both are Compute Capability 1.3 cards, meaning they can do double precision and have less strict memory access rules.
Generally, when it comes to card selection heuristics, the number of cores is a good approximation. Memory bandwidth is another. Some cards, like the GTX 295, are two normal cards strapped together in one case; those are often marketed as having twice the number of cores and memory. Working with such a card is like working with two separate cards from the programmer's point of view: your code won't automatically become twice as fast, you have to explicitly code for multi-GPU parallelism (see the sketch after this post). It may be good practice before building a cluster.
I advise against buying a Tesla or Quadro card for initial development; GeForces are more than enough and are much cheaper.
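To illustrate the multi-GPU point above, here's a minimal sketch of mine (assuming a CUDA 4.0+ toolkit, where one host thread may drive several devices; on the 2010-era toolkits you'd need one host thread per device):

```
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for real work.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);      // a GTX 295 reports two devices
    printf("Found %d CUDA device(s)\n", count);

    const int n = 1 << 20;
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);          // subsequent runtime calls target this GPU
        float *d = 0;
        cudaMalloc(&d, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(d, n);
        cudaDeviceSynchronize();
        cudaFree(d);
    }
    return 0;
}
```

Note this loop drives the devices one after another; to actually get the 2x you'd launch on both devices before synchronizing, e.g. from two host threads.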
Good point! I forgot about that, but the relaxed memory coalescing rules can easily become the difference between success and frustration, particularly when starting with Cuda.
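For what it's worth, the classic illustration is an offset copy (my own sketch of the idea): with offset == 0 the reads coalesce on all hardware, while with offset != 0, compute capability 1.0/1.1 breaks each half-warp into many separate memory transactions and 1.2/1.3 merges them into a few aligned segment loads.

```
#include <cuda_runtime.h>

// Each thread copies one element, shifted by 'offset'.
__global__ void offsetCopy(const float *in, float *out, int n, int offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i + offset < n)
        out[i] = in[i + offset];   // misaligned access when offset != 0
}

int main() {
    const int n = 1 << 20, offset = 1;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    offsetCopy<<<(n + 255) / 256, 256>>>(in, out, n, offset);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Time it with different offsets on a 1.1 card versus a 1.3 card and the difference is hard to miss.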
So, if I don't need to render any data (my tasks are similar to MD5 brute-forcing and neural network logic), a Quadro card is useless for me? That is, a GeForce card (with the same performance as a Quadro) will be cheaper?
Thanks to all, I think I'll pick a GTX 285 or similar; this should be enough for our first acquaintance :).
Basically yes.
And even if you had to render things, GeForces are more than capable - they are graphics cards, after all ;) Quadro cards are basically cards tuned for CAD software, with more sophisticated OpenGL drivers. They have no benefits that would show while you're getting introduced to CUDA, and they are more expensive.