What video card should I buy?

Hi.
I had an 8800 GT and “CUDA’d” on it, but now it’s broken…
I’d like to hear what I should buy. My purpose is not to play, but to code.
Should I buy something cheap like a GT 220, something mid-range like a GTS 250, or do I have to buy a new Fermi (GTX 465)?

Thank you for your answer.

For writing CUDA code I would want a Fermi GPU. The Fermi GPUs have a different architecture, very similar to the Tesla compute cards, and are much better at number crunching than the GTX 200-series GPUs.

I have a GTX 470 running protein folding simulations, and it has more than twice the number-crunching power of the GTX 260. The GTX 260 runs around $200 on average.

If you look at Newegg, the Zotac GTX 470 is on sale with a $20.00 instant rebate plus another $30.00 mail-in rebate, bringing the price down to $299.00. That’s about $40 more than the GTX 465 for a much better card.

Link to the Zotac card on Newegg:
http://www.tigerdirect.com/applications/Se…&CatId=3669

When you add it to the cart, it will show you the true price.

EG

Wow, so fast…

Thank you for your help, EG.

Looks like I’ll buy the 470 :)

I would kind of hold off and get some more input before you jump in and purchase. While I do know something about what you intend to do, I am far from being a programmer like you; I am heavily into Folding@home for the most part. There are many people on here with far more knowledge than I have in the programming area.

I am not sure these protein folding simulations utilize the GPU in quite the same way your code would. I just want to confirm I haven’t steered you in the wrong direction.

EG

No, your thoughts were exactly the same as mine; it’s the right direction for me. Thanks again, EG.

For someone looking to get into CUDA, my recommendation would be (depending on how much you want to spend):

  • ($100) GT 240 with GDDR5, easy to slide into most cases
  • (~$150-200) Used GTX 200-series card, if you know someone cashing out to upgrade
  • ($250) GTX 465, or a discounted GTX 470 as was suggested

Moving down the list increases your compute capability (1.2, 1.3, and 2.0, respectively). Appendix G of the CUDA programming guide lays out the differences. Between 1.2 and 1.3, all you get is double precision floating point capability. From 1.3 to 2.0, you get a lot of new interesting features that people are still exploring. In my mind, the biggest wins are the L1 and L2 cache, from which most of the other benefits of the Fermi architecture derive. If you think your application would benefit from caching (and you are not interested in developing for users with older cards), then I would consider the GTX 400 series the best starting place.
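If you want to confirm what you actually have once a card is in the machine, here is a minimal sketch (just an illustration I put together, not from any SDK sample; the file name is arbitrary) that asks the runtime for each device’s compute capability:

  // check_cc.cu -- print each device's compute capability (illustrative only).
  // Build with something like: nvcc check_cc.cu -o check_cc
  #include <cstdio>
  #include <cuda_runtime.h>

  int main()
  {
      int count = 0;
      if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
          printf("No CUDA-capable device found.\n");
          return 1;
      }
      for (int dev = 0; dev < count; ++dev) {
          cudaDeviceProp prop;
          cudaGetDeviceProperties(&prop, dev);
          printf("Device %d: %s, compute capability %d.%d\n",
                 dev, prop.name, prop.major, prop.minor);
          // Double precision arrived with compute capability 1.3.
          if (prop.major > 1 || (prop.major == 1 && prop.minor >= 3))
              printf("  double precision supported\n");
          // L1/L2 caching is a Fermi (compute capability 2.0) feature.
          if (prop.major >= 2)
              printf("  Fermi-class: L1/L2 cache available\n");
      }
      return 0;
  }

The major.minor pair it prints is the same number Appendix G keys its feature tables on.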

I’d wait for the GTX 460, which will be based on the Fermi architecture but use a smaller die (cheaper to produce).

The specs have been leaked already (see news postings on various hardware news sites). It will be available in two memory configurations; I quote the 1 GB model’s specs, as known so far.

GeForce GTX 460

  • CUDA Cores: 336
  • Graphics Clock: 675 MHz
  • Processor Clock: 1350 MHz
  • Memory Clock: 1800 MHz
  • Memory Amount: 1 GB
  • Memory Interface: 256-bit
  • Memory Bandwidth: 115.2 GB/sec

I tend to agree here, but if I am correct we won’t see the GTX 460 until sometime in the fall. It looks like the GTX 465 may become obsolete very fast once this model is released. One of the GPUs currently on the market will surely go the way of the GTX 280 and hit EOL fairly fast.

For what I do, the GTX 470 is about 25% faster than the GTX 465 at $40.00 to $50.00 more. But for number crunching, fluid simulations, and the like, this series of GPU is a monster.

Please bear with this…

Some people like to play with code… and some people code for playing (like developing games).

So, if you are going to code a game using CUDA, you should consider a good high-end video card.

Thanks all. :)

I ordered an MSI GTX 470. I’ll have it very soon.

I hope your purchase exceeds your expectations.

Good Luck

EG

Ahem, the summer has really just begun, and this week I expect mine to be shipped ;)

You are quite correct. I always give a date further away than what I personally expect. That way, if they drop the ball, I don’t get egg on my face.

Same reason NVIDIA doesn’t discuss unreleased hardware, or at least that’s their stance.