K40m CUDA Compatibility

Received some old cards from a friend and was curious what I could do with them. 2x old K40m that I put in my older Threadripper build. Was hoping to work on design or run some light modelling on them, but am not sure if they will be compatible with any of the newer CUDA toolkits. Any ideas?

The currently shipping CUDA version 11.4 still supports devices with compute capability 3.5 like the K40m. However, support for sm_35 is deprecated and may go away with any future CUDA version.
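If you want to verify this on your own install, asking nvcc to target the card explicitly will tell you quickly whether your toolkit still accepts it. A minimal sketch, assuming a CUDA 11.x nvcc is on the PATH (the source file name is just a placeholder):

```shell
# Compile for compute capability 3.5 (Kepler, e.g. K40m).
# CUDA 11.x builds this but emits a deprecation warning for sm_35;
# CUDA 12.x and later refuse to build for this target at all.
nvcc -gencode arch=compute_35,code=sm_35 my_kernel.cu -o my_kernel
```

If the build succeeds (warning aside), that toolkit version can still produce code for the K40m.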

Is your “older Threadripper build” a server with sufficient airflow for two passively cooled GPUs with a specified power draw of 245W each?

Compared to modern GPUs from the Turing and Ampere families, the efficiency of the eight-year-old K40m is pretty abysmal, and it is primarily useful as a space heater, IMHO.

A K40m requires forced flow-through cooling. It also places fairly large resource demands on the system BIOS. These cards are really not designed to be installed in an arbitrary platform, and people who do so often run into difficulties. You can find stories about these challenges on these forums.

On airflow, actually yes: the Threadripper itself has a top-of-the-line Cooler Master 360 mm AIO radiator. The case is a Dark Base Pro 900 v2 with three intakes in front, and I added a 140 mm fan in back plus a 90 mm Noctua at full speed pulling directly through the cards. Not a server, but pretty damn close.

Was thinking Linux, but it seems like Windows might be easier to get up and running.

You are welcome to give it a try and report back here. You can monitor GPU temperature with nvidia-smi. I find Folding@Home useful as a rough quick test app when it comes to playing with the power consumption of compute apps.
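For the temperature-monitoring part, nvidia-smi's query mode is handy for logging readings while a compute load runs. A sketch using stock nvidia-smi flags (the 5-second interval is just an example, and an installed NVIDIA driver is assumed):

```shell
# Print index, name, temperature, power draw, and utilization for each GPU,
# refreshing every 5 seconds (Ctrl-C to stop).
nvidia-smi --query-gpu=index,name,temperature.gpu,power.draw,utilization.gpu \
           --format=csv -l 5
```

Redirecting that output to a file makes it easy to correlate temperatures with what the compute app is doing over time.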

Passively-cooled GPUs require an airflow with a specific direction and particular speed. I cannot find NVIDIA’s relevant specifications for the K40m right now. The kind of enclosure provided by a workstation or gaming system does not typically satisfy the requirements, even when powerful case fans are used.

The Folding@Home idea is great! I’ll see if I can get it running.

The most detailed public specifications I am aware of for such products would be the board specification; an example for the K40 is here. However, you’re not going to find detailed thermal specifications there. The intent is for the product to be installed in a qualified server. The server qualification process involves server-specific engineering undertaken at the point of qualification (e.g. thermal monitoring setup and fan curves) and is generally not published, at least not by NVIDIA. NVIDIA has essentially no intent to support the kind of usage proposed here.

It is entirely possible that the cooling specifications I remembered seeing were not publicly available ones but those shared with system integrators under NDA.

[Later:] The “TESLA P100 PCIe GPU Accelerator Product Brief”, publicly available from NVIDIA’s website, provides a table of required CFM (cubic feet per minute) depending on inlet temperature. A similar public document may have been available for the K40m in the past.

I am fully aware, and have pointed this out multiple times in these forums myself, that Tesla products are designed exclusively for use by NVIDIA-qualified system integrators. Any “Frankenbuilds” using these GPUs that are created by third parties are entirely the responsibility of those third parties, and NVIDIA provides no support whatsoever (through this forum or otherwise) for such unapproved applications.

Lol, the entire thing is out of support at this point - gimme a break man

@PhobosRender I am just trying to set appropriate expectations while also trying to be helpful.