I am working on a research project on CUDA-accelerating some heavy computation. I am comparing my work with a paper whose authors used a Tesla C2050 to run their code. I finished my code and tested it on a very weak notebook NVIDIA card. Now I intend to buy a card. I was thinking about the GTX 750 2 GB; according to comparisons with the Tesla C2050, the GTX 750 is slightly better, so why is the Tesla that much more expensive? Also, for anyone here working in research: do you think the 750 will look too old, and would this be a negative point in judging my research and paper? Should I buy a GTX 980 or 970 instead? I am going to pay for it myself, and I am from Egypt, so it is quite costly here. Thanks a lot.
“…a negative point in judging my research and paper”
research is (supposed to be) objective
gpu cards can be cross-compared objectively
you should be able to show (argue) objectively that your choice of card would/would not affect the outcome of your results
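for example, a quick spec-sheet sanity check is enough to make the comparison objective — the numbers below are approximate published figures (not measurements), so double-check them against NVIDIA's spec pages before citing anything:

```python
# rough, spec-sheet comparison of the two cards (approximate published numbers,
# not measurements -- verify against NVIDIA's official spec pages before citing)
specs = {
    # card: (single-precision GFLOPS, double-precision GFLOPS, memory bandwidth GB/s)
    "Tesla C2050": (1030, 515, 144),
    "GTX 750":     (1044,  33,  80),
}

for card, (sp, dp, bw) in specs.items():
    print(f"{card}: ~{sp} SP GFLOPS, ~{dp} DP GFLOPS, ~{bw} GB/s")

# ratio of double-precision throughput: the consumer card is far weaker there
dp_ratio = specs["Tesla C2050"][1] / specs["GTX 750"][1]
print(f"C2050 has roughly {dp_ratio:.0f}x the DP throughput of the GTX 750")
```

note how the "slightly better" claim only holds for single precision and falls apart for double precision and bandwidth — which is exactly the kind of objective argument you would put in the paper.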
tesla cards are more high-end cards with extra bells and whistles
i would think ecc is one of them; i would think there are cluster-oriented features too
others may help you list the extras
Advantages of Tesla:
- 10 year warranty and support, which is needed for GPUs used in medical or other critical systems
- GPUDirect (RDMA), which allows data to be piped directly into the GPU without being staged through host (CPU) memory. Very useful for big-data real-time applications
- Better 64-bit (double precision) performance (when compared to most GTX GPUs)
- ECC error correction
- Longer-term reliability due to lower clock speeds and (I think) better quality control
- Fast peer-to-peer GPU-to-GPU memory transfer
Advantages of GTX:
- Generally higher clock speeds, and many models come with extra cooling, which allows for better performance
- Some models (Kepler GTX Titan and GTX Titan-Z) have superior 64-bit performance
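To make the 64-bit point concrete: theoretical peak throughput is roughly cores x clock x 2 FLOPs (FMA), scaled by the architecture's DP-to-SP ratio. A sketch with approximate spec-sheet values (not measurements):

```python
# theoretical peak GFLOPS = cores * clock(GHz) * 2 (FMA) * precision ratio
# core counts, clocks and ratios below are approximate spec-sheet values
def peak_gflops(cores, clock_ghz, dp_ratio):
    sp = cores * clock_ghz * 2   # single precision peak
    dp = sp * dp_ratio           # double precision peak
    return sp, dp

# Tesla C2050 (Fermi): DP runs at 1/2 the SP rate
# GTX 980 (Maxwell):   DP runs at only 1/32 the SP rate
for name, cores, clock, ratio in [("Tesla C2050",  448, 1.15,  1 / 2),
                                  ("GTX 980",     2048, 1.126, 1 / 32)]:
    sp, dp = peak_gflops(cores, clock, ratio)
    print(f"{name}: ~{sp:.0f} SP GFLOPS, ~{dp:.0f} DP GFLOPS")
```

So a much newer GTX card can beat the old Tesla by 4x in single precision and still lose to it in double precision — whether that matters depends entirely on which precision your code uses.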
I strongly recommend the GTX 980, as it is very popular, has great performance, is very reliable, and comes in many flavors. If you really need to save money, then the GTX 780 Ti would be another good choice.
Do not buy the cheap GTX 980s (MSI or Zotac); spend the extra 20 bucks and get the EVGA ACX GTX 980.
Avoid the Quadros for compute, as they are mainly intended for CAD type work.
This list is generally accurate for compute. Of all cc5.2 desktop GPUs, the GTX 960, 970 and 980 Ti seem to have the best GFLOPS/$ value.
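A rough way to rank them yourself — the GFLOPS figures are theoretical SP peaks and the prices are approximate US launch MSRPs, so treat this as illustrative only (prices in Egypt will differ substantially):

```python
# GFLOPS-per-dollar sketch; SP GFLOPS are theoretical peaks and prices are
# approximate US launch MSRPs -- illustrative only, local prices vary a lot
cards = {
    # card: (approx SP GFLOPS, approx launch price USD)
    "GTX 960":    (2308, 199),
    "GTX 970":    (3494, 329),
    "GTX 980":    (4612, 549),
    "GTX 980 Ti": (5632, 649),
}

for name, (gflops, price) in sorted(cards.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1],
                                    reverse=True):
    print(f"{name}: {gflops / price:.1f} GFLOPS/$")
```

On these numbers the cheaper cards actually win on raw GFLOPS/$, which is worth knowing when you are paying out of pocket.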
As CudaaduC has mentioned, you can go with their corresponding aftermarket variants at usually less than 5% extra cost, for 10~15% better performance and better cooling.
(But if you can afford to wait a year or so, Pascal cards are probably going to blow everything out of the water with massively better compute power, memory bandwidth and memory size.)
from CudaaduC’s elaborate list, i would conclude that, if your work does not focus on a particular hardware feature, but is rather software-oriented (algorithmic or mathematical), only ecc really poses a threat to your results, compared to those attained on a tesla card
and by noting the (low) probability of ecc-related errors, and the even lower probability that such an error would persist across multiple, identical runs, ecc as a threat can easily be accounted for, and thus argued away
particularly for (some of the) first trial runs, gtx cards may very well suffice for research purposes
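to put the "argue it away" point in numbers: if a single run has some small probability p of being silently corrupted by a memory bit-flip, then (assuming independent runs) the chance that corruption survives k repeated runs with matching results is at most p**k — the value of p below is made up purely for illustration:

```python
# illustrative only: p is a made-up per-run soft-error probability, chosen just
# to show how quickly repetition shrinks the risk (runs assumed independent)
p = 1e-3  # hypothetical chance that one run is silently corrupted

for k in (1, 2, 3):
    print(f"{k} matching run(s): corruption probability <= {p ** k:.0e}")
```

so a handful of repeated, agreeing runs on a non-ecc gtx card already makes an undetected memory error an extremely weak objection to your results.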