Is it better to have one fast card or two slow(ish) ones?

I am in the market for a CUDA-compatible graphics card so I can play around with some CUDA development.

I am going to be buying a graphics card in the next month. I was wondering: I could get a GeForce GTS 250 1GB (117 euro) now and get working, purchasing another one in 2 months (two cards), or I could wait and just buy a GeForce GTX 275 896MB (239 euro) once I pool my money together.

So, would the two cards be better in the long run, or the one good card? Looking for the best bang for my buck.

Of course Google is useless on this subject, so I’m looking for the opinions of people with experience.

I’ve only just started learning about HPC, so my experience is very limited.
Any advice is greatly appreciated.


It depends… if you’re just trying to learn CUDA, then you can use a cheap card.

If you have a specific application, it is a bigger decision, and application dependent. Multi-GPU programming is harder (but worthwhile), and some applications don’t scale as well across multiple GPUs.
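To give a feel for the extra work multi-GPU involves: each device gets its own memory and its own launches, so the host code has to partition the problem explicitly. A minimal sketch using the runtime API, with a hypothetical `scale` kernel standing in for real work (note: with the CUDA 2.x runtime of this era, each device needed its own host thread and context; newer toolkits let one thread switch devices with `cudaSetDevice`, as below):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: scales an array in place.
__global__ void scale(float *d, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= s;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    // Split the work: each device gets its own buffer and launch.
    // Real code would also split the input data, copy each half to
    // its device, and copy the results back.
    const int n = 1 << 20;
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        float *d = 0;
        cudaMalloc(&d, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
        cudaFree(d);
    }
    return 0;
}
```

None of that bookkeeping exists in the single-GPU version, which is the sense in which some applications simply aren’t worth splitting.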

When in doubt, be conservative and stay cheap… even if that’s a mistake, you can’t know it now, and it’s better than the mistake of buying more GPU than you really need.

If you have very little experience, then it will probably take some time to master CUDA and parallel programming. So I would recommend: go cheap. By the time you’ve finished your app and are ready for heavy-duty stuff, newer, faster cards will be available.

And GTS cards are still available, but GTX cards are hard to come by in Europe anyway… I’ve been waiting months for a GTX 295, and my supplier has told me to wait no longer, cancel the order, and start hoping (praying?) that the Fermi chips become available soon…

For me there are two criteria: compute capability and cost. Speed does not matter at all. When I was buying my PC, the GTX 260 was the cheapest compute capability 1.3 card, and I am happy with it. When Fermi is out I will probably again go for the cheapest Fermi card.
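If compute capability is your deciding criterion, it is easy to check what a card reports via the runtime API’s `cudaGetDeviceProperties`; a small sketch:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, dev);
        // e.g. a GTX 260 reports compute capability 1.3,
        // the minimum for double precision support.
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, p.name, p.major, p.minor);
    }
    return 0;
}
```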

The rumour mill has it that the last lots of GT200b wafers were run through diffusion by TSMC last year and there won’t be any more. So the consumer GT200b cards (GTX 260/GTX 275/GTX 285/GTX 295) have basically vanished and won’t be seen again.

To the original poster, I would echo the other sentiments here and suggest getting a good mid-range card to start with. The GT220 or GT240 offer all of the architectural features of the current Tesla cards except double precision floating point support. The GDDR5 GT240 looks to be a great CUDA or OpenCL development board, unless you use Linux. I wouldn’t recommend a GTS250 or a 9xxx-series card for development any more, simply because there are good, inexpensive compute capability 1.2 cards around which are better from a programmability point of view.

Didn’t know that — they are not listed in the Programming Guide for CUDA 2.3, but they are in the CUDA 3.0 beta Programming Guide :)

Unfortunately they were not available a year ago, otherwise I would probably have gone for a GT240 or even a GT220/GT210…

Thank you all for the information.
I shall learn to walk before I start running :-)
Many thanks.


removing my comment, as I seem to have misread someone’s statement (confused Tesla/Fermi)