Incorrect compute capability for Quadro M1200

I’m not sure if this is the correct place to post this, but there are a number of members of my team who are having issues with using the Nvidia Quadro M1200 GPU. When we run the CUDA 9.1 toolkit, it reports that the GPU has a compute capability of 5.0, but the official Nvidia documentation clearly states that the compute capability for this GPU is 5.2: https://developer.nvidia.com/cuda-gpus
So, this prevents us from running the latest version of PyTorch, which was the entire reason we purchased these laptops.
Is this a bug in the CUDA toolkit or somewhere else? Or is it false advertising?

What is “it” specifically? The deviceQuery sample application?

What is “this” specifically? What exact error message(s) are you getting from PyTorch? Are these compile-time or run-time messages?

A properly built CUDA application should not care whether a GPU has compute capability 5.0 or 5.2. I am not aware of any architectural differences between these visible at the CUDA level (I think there may be a few differences in the size of some caches, but that should be transparent at the CUDA level).

When in doubt, I would use the compute capability reported by deviceQuery. For what it is worth, the Wikipedia entry on Quadro lists the M1200 as a CC 5.0 device.

I ran deviceQuery and got the report with compute capability equal to 5.0 for my Quadro M1200 GPU. However, on the official NVIDIA website https://developer.nvidia.com/cuda-gpus, it says Quadro M1200 has 5.2 compute capability.
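If you want to check this programmatically rather than eyeballing the output, you can extract the capability from deviceQuery's report. The line format in the sketch below matches the typical output of the CUDA deviceQuery sample ("CUDA Capability Major/Minor version number: 5.0"); treat the exact pattern as an assumption and adjust it if your toolkit version prints it differently:

```python
import re

def parse_capability(device_query_output: str):
    """Extract (major, minor) compute capability from deviceQuery text output.

    Returns None if no capability line is found.
    """
    m = re.search(
        r"CUDA Capability Major/Minor version number:\s*(\d+)\.(\d+)",
        device_query_output,
    )
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

# Example line as it appears in typical deviceQuery output for an M1200:
sample = "  CUDA Capability Major/Minor version number:    5.0"
print(parse_capability(sample))  # -> (5, 0)
```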

It is in the nature of documentation that it can be wrong on occasion (there is no automated testing of documentation, for example). Any instances of documentation errors can be reported to NVIDIA via the normal bug reporting channel, i.e. the web form linked from the registered developer website.

In OP’s case it is not clear how this small discrepancy in stated compute capability is preventing them from running PyTorch. Best I can tell from a quick internet search, PyTorch supports all GPUs with compute capability >= 3.0 (http://pytorch.org/docs/stable/torch.html)
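For what it's worth, a minimum-capability check of the kind described above reduces to comparing (major, minor) tuples lexicographically, which is why a 5.0 device clears a 3.0 threshold just as easily as a 5.2 device does. The function name and threshold below are illustrative, not PyTorch's actual internals:

```python
# Sketch of a minimum-compute-capability gate. Capabilities are
# (major, minor) pairs; Python compares tuples lexicographically,
# which matches how compute capabilities are ordered.
MIN_CAPABILITY = (3, 0)  # illustrative threshold, per the PyTorch docs cited above

def is_supported(capability, minimum=MIN_CAPABILITY):
    return tuple(capability) >= tuple(minimum)

print(is_supported((5, 0)))  # a CC 5.0 device clears a 3.0 minimum
print(is_supported((5, 2)))  # so does a CC 5.2 device
print(is_supported((2, 1)))  # a pre-Kepler part would be rejected
```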

Thanks for pointing out the issues in the documentation. This appears to be rectified now.

Regarding pytorch, you should be able to use binaries of 0.3.0 version, or else build your own binaries from source on later versions:

https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803/5

The list entry for the Quadro M1200 has indeed been fixed to say CC 5.0. However, as best I know, the Quadro M620 on the same list is based on the same GPU (GM107) as the Quadro M1200, yet it still shows as CC 5.2.

That’s a pretty bad mistake with the documentation… It’s like buying a truck because the manufacturer said that it has “all wheel drive” only to discover that it actually doesn’t have that feature at all.

What feature of compute capability 5.2 does your application (or any application, for that matter) rely on that is absent from compute capability 5.0?

I like car analogies too, and as far as differences between 5.0 and 5.2 go, it seems more a case of a car listed (because of a typo) with 190 hp which turns out to have only 180 hp, with no functional differences.

Yeah, except where only vehicles with 190 hp or more are allowed to participate in a tournament that you intended to compete in with the vehicle.

I have no idea what that means. Per its documentation, PyTorch supports all GPUs with compute capability >= 3.0, so GPUs with compute capabilities 5.0 and 5.2 are both supported.

PyTorch recently stopped supporting GPUs with compute capability under 5.2.

Color me surprised. I did an internet search, and I believe my Google-fu is pretty good, but I could not find any indication that this occurred. Could you provide a relevant link? What reason did the developers give for this new restriction? I am interested in their reasoning, as I am not aware of any functional differences between 5.0 and 5.2.

In any event, I suspect that any such restrictions apply to pre-built binaries only, not if you build from source, as is the custom in the FOSS world.

[Later:] I think I found a relevant thread by searching for your name :-)
https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803/18

The reason given by the PyTorch developer for dropping support for “older” compute capabilities frankly is asinine, in my not so humble opinion. But the thread also contains information on how to build the software yourself, so you should be good.

I stand corrected: https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803/21

PyTorch is bringing back support for compute capability 5.0.

Good to hear the PyTorch developers have come to their senses :-)