Understanding compute capability and the CUDA Toolkit version

Hi,

I’m having a hard time understanding what I’ll need to run TensorFlow economically.

I was looking at older cards like the Tesla M2090 and S1070.

I’ve heard TensorFlow needs lots of RAM, and these cards have 5 to 6 GB each; they can be had off eBay for $100 to $200. I have server rack space, so power is not an issue.

I figured I could use multiple S1070 cards as opposed to, say, a GTX 750.

Now I can see that TensorFlow requires CUDA Toolkit 7.5+.

However when I look at the Wikipedia page https://en.wikipedia.org/wiki/CUDA it says the following:

The S1070 supports compute capability 1.3
The M2090 supports compute capability 2.0
The GTX 950 supports compute capability 5.2

I can’t seem to understand the mapping between compute capability <–> CUDA Toolkit version.

Would appreciate any comments on how to figure this out.

Thanks.

Compute capability is a property of NVIDIA GPUs. Modern versions of CUDA (including CUDA 7.5, the latest non-experimental shipping version) require GPUs with compute capability >= 2.0.

The Wikipedia page you referenced has a handy table that lists GPUs ordered by compute capability. Note that some CUDA-enabled applications (and I believe this includes some deep learning packages!) require a compute capability higher than 2.0, so I would suggest checking requirements carefully.
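If you already have a card installed along with the CUDA toolkit, you don’t need the table at all: you can query the compute capability directly from the CUDA runtime. A minimal sketch (this is essentially what the `deviceQuery` sample shipped with the toolkit reports; it obviously needs a CUDA-capable GPU and driver to run):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found\n");
        return 1;
    }
    for (int dev = 0; dev < count; dev++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major / prop.minor hold the compute capability, e.g. 5.2
        printf("Device %d: %s, compute capability %d.%d, %zu MB\n",
               dev, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```

Compile with `nvcc query_cc.cu -o query_cc` and run it on the target machine.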

[Later:] The TensorFlow website (https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html) actually states:

“TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.0.”

This means you cannot use an S1070 or an M2090 for GPU acceleration of TensorFlow.

@njuffa,

Many thanks for the clarification – this helps a lot :-)

@njuffa,

Based on your advice, and looking at this Wikipedia page (https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units), we’re considering:

A. 2x GeForce GTX Titan Z
B. 4x NVIDIA GeForce GTX TITAN

These seem to be more attractive than, say, a Tesla M4, M40, M60, etc. Or am I missing something here?

Thanks.

I do not have any experience with TensorFlow, and I do not have experience with any of the GPUs on your list, so I am afraid I am unable to comment in detail on your part selection process.

As best I know, both the Titan and the Titan Z are only available on the second-hand market at this point. Depending on your geographical location, and the intended usage pattern and duration, you may want to consider not only the original purchase price but also operating costs (electricity).

@njuffa,

Good point … we’re in the USA and are looking at the boards on eBay.

On electricity costs: we’re covered on that front because we’re in a data center and have plenty of power as part of the contract.

Thanks again.