I am in the early stages of putting together the computing requirements for a new system we are building, and I am trying to specify the GPU(s) as part of the overall computing resources. The reason I need GPUs is to perform scientific computing (image and signal processing) on large data sets, millions of pixels each, in a real-time environment where high throughput is required. I am at a loss as to which card to choose. Here are some additional specific questions:
0- Programmer productivity/learning curve. This is really important; if an older model doesn't let us use the latest simplifications/abstractions, then it is not a good choice, since developer time is a lot more expensive than hardware. (The first sketch after this list shows the kind of abstraction I mean.)
1- Compute capability. Do I really need 5.x, or is 3.5 sufficient? (The second sketch after this list shows how I'd plan to verify whatever minimum we settle on.)
2- If I went with 5.x, do I lose the ability to use any GPU libraries, or will they just run slower?
3- I gather NVIDIA makes cards designed specifically FOR GPU computing, like Tesla, and generic ones for gaming (although those are also CUDA-capable). Is there a difference in how these two types are connected to the host CPU? Bandwidth? I understand that keeping the GPU "fed" with data is the biggest difficulty in high-throughput applications (that's what I've heard; I suppose I'll find out, though hopefully not the hard way). The third sketch after this list shows the kind of streaming pipeline I have in mind.
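On point 0, the kind of simplification I have in mind is unified memory: as I understand it, a single cudaMallocManaged allocation is visible to both host and device, so the explicit copy bookkeeping goes away. A minimal sketch of what I hope our code could look like (the kernel body and sizes are placeholders, error checks are elided, and it assumes a device that reports managed-memory support):

```cpp
// Sketch of the "newer abstraction" I'd like to rely on: unified memory.
// Assumes a device with managed-memory support; error checks elided.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* px, size_t n, float k) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] *= k;  // stand-in for the real image processing
}

int main() {
    const size_t n = 1 << 20;  // ~1M pixels, roughly one of our frames
    float* px = nullptr;
    cudaMallocManaged(&px, n * sizeof(float));  // one allocation, no explicit copies

    for (size_t i = 0; i < n; ++i) px[i] = 1.0f;   // host writes directly
    scale<<<(n + 255) / 256, 256>>>(px, n, 2.0f);  // device uses the same pointer
    cudaDeviceSynchronize();  // wait before the host touches the data again

    printf("px[0] = %f\n", px[0]);
    cudaFree(px);
    return 0;
}
```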
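On point 1, whatever minimum we settle on, I'd want to verify it at startup. A minimal sketch using the CUDA runtime API (error checks elided):

```cpp
// Sketch: enumerate devices and report compute capability, plus whether
// managed (unified) memory is supported, since that bears on point 0 too.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, compute capability %d.%d, managed memory: %s\n",
               d, prop.name, prop.major, prop.minor,
               prop.managedMemory ? "yes" : "no");
    }
    return 0;
}
```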
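On point 3, the structure I've read about for keeping the GPU fed is pinned host buffers plus multiple CUDA streams, so the copy for one frame overlaps the kernel for another. A rough sketch of what I imagine our pipeline looking like (the frame source, sizes, and frame count are all made up; error checks elided):

```cpp
// Rough sketch of a two-stream streaming pipeline: pinned host buffers so
// cudaMemcpyAsync can run truly asynchronously, and two streams so the
// copy for one frame overlaps the kernel for the previous one.
#include <cuda_runtime.h>

__global__ void process(float* px, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] *= 2.0f;  // stand-in for the real image/signal processing
}

int main() {
    const size_t n = 1 << 20;               // one frame, ~1M pixels
    const size_t bytes = n * sizeof(float);

    float* h_buf[2];
    float* d_buf[2];
    cudaStream_t stream[2];
    for (int i = 0; i < 2; ++i) {
        cudaMallocHost(&h_buf[i], bytes);   // pinned: required for true async copies
        cudaMalloc(&d_buf[i], bytes);
        cudaStreamCreate(&stream[i]);
    }

    const int frames = 100;                 // made-up frame count
    const int blocks = (int)((n + 255) / 256);
    for (int f = 0; f < frames; ++f) {
        int s = f % 2;                      // ping-pong between buffer/stream pairs
        cudaStreamSynchronize(stream[s]);   // slot must be idle before refilling
        // ... fill h_buf[s] with the next frame here ...
        cudaMemcpyAsync(d_buf[s], h_buf[s], bytes,
                        cudaMemcpyHostToDevice, stream[s]);
        process<<<blocks, 256, 0, stream[s]>>>(d_buf[s], n);
        cudaMemcpyAsync(h_buf[s], d_buf[s], bytes,
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < 2; ++i) {
        cudaFreeHost(h_buf[i]);
        cudaFree(d_buf[i]);
        cudaStreamDestroy(stream[i]);
    }
    return 0;
}
```

Whether two buffers are enough presumably depends on the actual copy vs. compute times, which is part of what I'd want to measure.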
The problem with just getting the latest and greatest is NOT price but power consumption, which, given other constraints, needs to be kept in check.
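To keep that in check, I assume we'd want to measure actual draw under our real workload rather than trusting the datasheet TDP alone; I believe NVML (shipped with the driver, linked via -lnvidia-ml) exposes this. A minimal sketch (error checks elided):

```cpp
// Sketch: read current power draw and the board's power limit via NVML,
// so the power budget can be checked against the real workload.
#include <cstdio>
#include <nvml.h>

int main() {
    nvmlDevice_t dev;
    unsigned int draw_mw = 0, limit_mw = 0;

    nvmlInit();
    nvmlDeviceGetHandleByIndex(0, &dev);
    nvmlDeviceGetPowerUsage(dev, &draw_mw);             // current draw, milliwatts
    nvmlDeviceGetPowerManagementLimit(dev, &limit_mw);  // enforced limit, milliwatts
    printf("draw: %.1f W, limit: %.1f W\n", draw_mw / 1000.0, limit_mw / 1000.0);
    nvmlShutdown();
    return 0;
}
```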
Please provide some advice and pointers on how to go about selecting a GPU for my scientific-computing, high-throughput application.