I am a researcher and need an upgrade to my 8800 GTX. I was going to wait for the C1060 to release, but the only advantage I see is that the Tesla has more dedicated memory (4 GB) as opposed to the 1 GB of the 8800 GTX I’m considering. Since my application doesn’t need much memory at all (a couple of MB), I’m asking whether the Tesla has other advantages. More shared memory? More registers? More double-precision FPUs?
I can’t find this information anywhere.
So if I don’t need the global memory, why would I go for the Tesla? Is that the only difference?
The only differences I’ve seen reported are larger device memory (as you mention) and better reliability testing. (Full disclosure: I’m not aware of any third-party verification of the latter claim. We still need a standard CUDA torture-test app.) I believe tmurray mentioned that they deliberately underclock the memory on the Tesla cards relative to the gamer cards to improve reliability.
There have been a few reports of the gamer cards, especially the overclocked models, having CUDA problems. It seems to be very rare though, and so I consider the gamer cards totally acceptable for CUDA R&D. Perhaps in a deployed product, or business-related computation, the improved QA in the Tesla series would be worth the extra money.
There are a number of differences. To begin with, the C1060 has compute capability 1.3 while the 8800 GTX has 1.0. Have a look at the appendix of the programming guide to see what the different compute capabilities support.
There are also more hardware differences: the C1060 has more multiprocessors (30 vs. 16), double the registers per multiprocessor (16K vs. 8K), and, importantly for your question, native double-precision units, which compute capability 1.0 lacks entirely.
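You can check most of these numbers on your own hardware with the runtime API’s cudaGetDeviceProperties. A minimal sketch (compile with nvcc; needs a CUDA-capable GPU in the machine):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // Compute capability 1.3 (C1060) has double-precision units;
        // 1.0 (8800 GTX) does not.
        printf("Device %d: %s\n", d, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        printf("  Registers/block:    %d\n", prop.regsPerBlock);
        printf("  Shared mem/block:   %zu bytes\n", prop.sharedMemPerBlock);
        printf("  Global memory:      %zu MB\n", prop.totalGlobalMem >> 20);
    }
    return 0;
}
```

Run it on both cards and you’ll see the register count and multiprocessor count differences directly; the compute capability field tells you which programming-guide appendix table applies.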