Tesla M60 for CUDA computation without GRID?


We are going to buy a workstation for an upcoming scientific project, which is designed to compute some specific FEM simulations in parallel on its GPU with CUDA. We were looking for appropriate GPUs and found the Tesla M60. It seems the M60 is designed to be used in a server to provide GPU compute power to virtual machines, but it also has pretty good 'CUDA performance': 7.4 TFLOPS / 4096 CUDA cores, compared to e.g. the Tesla P100 (an alternative fitting our budget) with 4.7 TFLOPS / 3584 cores. We are not going to use it as a server for any virtual machines – just in a 'usual' workstation.

My questions:

  1. Is it possible to use the Tesla M60 for CUDA computations without buying any additional licenses for software like GRID? It seems like it is possible, but I haven't found a reliable answer so far.

  2. Is there any drawback we should keep in mind, given that it is originally designed for GRID? Is there any reason not to use it in a usual workstation without the GRID/server stack?

Thank you very much!

The M60 is passively cooled and won't run reliably in a workstation. You should look at the GP100 or GV100. In addition, the P100 is much faster than the M60 – your comparison was off: you took the double-precision figure for the P100. 9.3 TF (single precision) is the right number to compare against. As I said, look at the GP100 or GV100 for your use case.
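For reference, the headline figures in this thread can be reproduced from cores × clock × 2 (one FMA counts as two FLOPs), scaled by the architecture's FP64:FP32 throughput ratio (1/32 on the M60's Maxwell GM204, 1/2 on the P100's GP100). A quick sketch – the clock values are taken from public spec sheets and are assumptions here, not from this thread:

```python
def peak_tflops(cores, clock_ghz, fp64_ratio=1.0):
    """Theoretical peak TFLOPS: cores * clock * 2 (FMA), scaled by FP64 ratio."""
    return cores * clock_ghz * 2 * fp64_ratio / 1000

# Tesla M60: 2x GM204 (Maxwell), base clock ~899 MHz, FP64 at 1/32 of FP32 rate
m60_fp32 = peak_tflops(4096, 0.899)           # ~7.4 TF (the figure quoted above)
m60_fp64 = peak_tflops(4096, 0.899, 1 / 32)   # ~0.23 TF

# Tesla P100 (GP100, Pascal), boost clock ~1303 MHz, FP64 at 1/2 of FP32 rate
p100_fp32 = peak_tflops(3584, 1.303)          # ~9.3 TF
p100_fp64 = peak_tflops(3584, 1.303, 1 / 2)   # ~4.7 TF

print(f"M60:  {m60_fp32:.1f} TF FP32, {m60_fp64:.2f} TF FP64")
print(f"P100: {p100_fp32:.1f} TF FP32, {p100_fp64:.1f} TF FP64")
```

So the 7.4 TF (M60) vs 4.7 TF (P100) comparison in the question mixes single precision against double precision; like-for-like, the P100 wins in both.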


Simon, thank you very much! I was searching for exactly this information for quite a while!

…btw, I was looking at double-precision performance since we will mainly use double-precision operations. My mistake was to take the single-precision value for the M60. The M60's double-precision performance is indeed far worse than the GP100's - yes, another reason not to take an M60 (even if it were possible).

Thanks again!