GTX 280 vs C1060

Hi all

I am unable to get the search feature to work, so I do not know if this has been asked before.

What is the difference between a GTX 280 graphics card and a C1060 card, apart from the amount of RAM and, of course, the price?

Or, put another way: what do I gain by buying a C1060 over a GTX 280 card?

br - Morten

AFAIK the core is exactly the same, so you “only” get a whole lot more RAM at a lower memory bandwidth (102 GB/s vs. 141 GB/s), plus a little more number-crunching power thanks to a higher shader clock than the GTX 280 (1080 GFLOPS vs. 933 GFLOPS).
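For what it’s worth, those peak figures follow from the usual marketing formula: SPs × shader clock × 3 FLOPs per clock (a dual-issued MAD + MUL). A quick back-of-the-envelope check; the C1060 clock below is just the value implied by the 1080 figure above, not an official spec:

/* back_of_envelope.c -- illustrative only; clock values are assumptions */
#include <stdio.h>

static double peak_gflops(int sps, double shader_clock_ghz, int flops_per_clock)
{
    return sps * shader_clock_ghz * flops_per_clock;
}

int main(void)
{
    /* GTX 280: 240 SPs at a 1.296 GHz shader clock -> ~933 GFLOPS */
    printf("GTX 280: %.0f GFLOPS\n", peak_gflops(240, 1.296, 3));

    /* C1060: same 240 SPs; the 1080 GFLOPS figure quoted above would
       imply a shader clock of about 1080 / (240 * 3) = 1.5 GHz */
    printf("C1060:   %.0f GFLOPS\n", peak_gflops(240, 1.5, 3));
    return 0;
}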

This has been discussed to death.

But searching is very easy. The trick is to use Google, e.g. with the search string:

c1060 vs gtx280 site:forums.nvidia.com

Hi

The Tesla C1060 & S1070 don’t support DirectX & OpenGL.

They only support the CUDA architecture (for uses such as scientific computing, oil and gas, etc.).

The GTX 280 supports DirectX, OpenGL, and the CUDA architecture.

This is incorrect; Tesla supports the same OpenGL/DirectX features as the GeForce line.

GTX 280s should give a better price/performance ratio. Three rather than four in one box is a drawback, OK, but one nicely compensated by good bandwidth on a motherboard like the EVGA 790i SLI FTW or the P5T6 X58 from ASUS.

Actually, why not three 295s? If PCIe bus contention is not a problem for you (it may be), applications would benefit from the 1440 cores on three such cards.

In my farm, both GPUs will find a place, on separate machines: the 280 mostly for quiet program development, the 295 for noisy number crunching.

Another difference between the Tesla series and the other GPUs is the quality of the hardware. From personal communication with an NVIDIA employee, I know that the chips that go into Tesla GPUs are more reliable in terms of hardware faults, and of course that makes them more expensive, because they undergo more rigorous testing during production.

The reason the Tesla chips need to be more reliable is that they are used for general-purpose computing and not for graphics only. If there is a small fault on a graphics GPU (like a GeForce), and because of it a pixel in an image comes out blue instead of red, that is not such a big deal. But for actual computations the user needs exact results without errors, because the impact of a small arithmetic error can be huge in later computations.
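To put a number on “huge”: a single flipped bit is enough to turn a perfectly ordinary value into garbage, and everything downstream inherits the damage. A toy illustration (plain host code, nothing GPU-specific, and not tied to any particular failure mode):

/* Illustrative only: how much one flipped bit can change a float. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = 1.0f;
    uint32_t bits;

    memcpy(&bits, &x, sizeof bits);   /* reinterpret the float's bit pattern */
    bits ^= 1u << 30;                 /* flip a single exponent bit */
    memcpy(&x, &bits, sizeof x);

    printf("%g\n", x);                /* 1.0 becomes +inf */
    return 0;
}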

This claim comes up often in these discussions, but plenty of people in the forums (including myself) have been using the consumer cards with no problems for calculations. Overclocked and overheating cards can certainly develop a bit-flipping problem, but cards running at the NVIDIA spec generally seem to be just fine.

I’m not disputing this claim from NVIDIA employees, but so far there has been no verifiable or quantitative statement of the reliability of the Tesla vs. GeForce product lines. (I know that is hard to do. How does anyone verify the claimed MTBF or bit error rates on a hard disk?)

I have also heard such stories. But the thing is, NVIDIA advertises Tesla as the face of HPC. So if you have a reliability problem with a Tesla, you can talk to NVIDIA to resolve it. But with a GeForce, they can wash their hands of it, saying it was made for graphics and that they can’t really do anything about it.

So if you are going to use it for business with full, fool-proof support, it is better to stick with Tesla.

Go to www.nvidia.com → Products menu → High Performance Computing → it just shows the Teslas.

Sure, but they don’t have a video output so even if they do support OpenGL/DirectX it’s kind of useless.

I’m not sure it is. I’ve been working on some code that included a gridder. The original C code was written as a scatter operation, which is hideously bad for threading with CUDA. I had to rewrite it as a gather, and that was a bit painful. However, a year before, a student had worked on the code, and he knew OpenGL. That could access extra hardware on the GPU (the compositor, I believe, but don’t hold me to that), which provided a thread-safe accumulator, so he could keep the gridding operation as a simple scatter. I wish I could have just reused his code, but the reference program had changed in the meantime.
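For anyone curious what the scatter-versus-gather distinction looks like in CUDA, here is a minimal sketch of the two kernel formulations. The names, data layout, and nearest-cell accumulation are hypothetical simplifications of a real gridder, and this is not the student’s OpenGL trick; note that the thread-safe scatter variant relies on floating-point atomicAdd, which only exists on compute capability 2.0 and later, which is exactly why a scatter was so awkward on GT200-class cards.

// Hypothetical sketch of the two gridding formulations (not the real code).
#include <cuda_runtime.h>

struct Sample { float u, v, value; };  // one measurement and its grid coordinates

// Scatter: one thread per input sample, each writing into whichever grid
// cell its sample lands on. Two samples can hit the same cell, so the
// accumulate must be atomic. Float atomicAdd needs compute capability 2.0+,
// which GT200 parts (GTX 280 / C1060) do not have.
__global__ void grid_scatter(const Sample* samples, int n_samples,
                             float* grid, int grid_size)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_samples) return;

    int gx = (int)samples[i].u;
    int gy = (int)samples[i].v;
    if (gx < 0 || gx >= grid_size || gy < 0 || gy >= grid_size) return;

    atomicAdd(&grid[gy * grid_size + gx], samples[i].value);
}

// Gather: one thread per output grid cell, looping over all samples and
// accumulating the ones that fall into its cell. No write conflicts, but
// every thread scans every sample (a real gridder restricts the search).
__global__ void grid_gather(const Sample* samples, int n_samples,
                            float* grid, int grid_size)
{
    int gx = blockIdx.x * blockDim.x + threadIdx.x;
    int gy = blockIdx.y * blockDim.y + threadIdx.y;
    if (gx >= grid_size || gy >= grid_size) return;

    float acc = 0.0f;
    for (int i = 0; i < n_samples; ++i)
        if ((int)samples[i].u == gx && (int)samples[i].v == gy)
            acc += samples[i].value;

    grid[gy * grid_size + gx] = acc;
}

The gather version trades redundant reads for conflict-free writes, which is why it threads cleanly on the older hardware even though it takes more work to restructure.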