Hey there! I’ve already posted in the NVML forum about this, but I’ll post here as well to increase visibility…
The problem with nvidia-smi is actually a bug in NVML, where it incorrectly reports that a graphics card is unsupported when it actually is supported. I’ve created a workaround for this; you can find it here:
https://github.com/CFSworks/nvml_fix
To use it, you should be on either the 325.08 or 319.32 drivers. Build the proper version for your driver:
$ make 325.08
Then copy the built library over your libnvidia-ml.so.1 (as root):
rm /usr/lib/libnvidia-ml.so.1
cp built/325.08/libnvidia-ml.so.1 /usr/lib
This should fix the bug, and allow nvidia-smi to report all information on your GPU.
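To check that the fix took, the simplest thing is a full query (this is just plain nvidia-smi usage, nothing specific to the workaround):

$ nvidia-smi -q

If the card is no longer being treated as unsupported, the fields should come back populated instead of mostly reading N/A.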
Neat tidbit I just noticed… it seems that on newer driver revisions (I am on 331.38) nvidia-settings has gained additional attributes that expose most of the same parameters nvidia-smi can report on a fully supported GPU, regardless of GPU type.
For example:
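Roughly along these lines (attribute names as I see them on 331.xx; they may differ slightly on other drivers):

$ nvidia-settings -q GPUCurrentClockFreqs
$ nvidia-settings -q GPUCoreTemp
$ nvidia-settings -q GPUUtilization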
This is very useful for anyone who wants to monitor real-time clocks in Linux… you can certainly script whichever variables you wish. For a list of all available parameters:
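If I remember the flag right, it is:

$ nvidia-settings -q all

which dumps every readable attribute (it is a long list, so you may want to pipe it through less).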
I believe the only drawback is that an X server needs to be running to access these parameters (as opposed to nvidia-smi, which does not need one).
For example, here is a quick Linux way to dump the GPU clocks in an infinite loop to a txt file that can be used to average clocks in a spreadsheet after the fact. In my case, I have 3 GPUs in my system, ergo the ‘gpu:2’ grep (the count starts from 0).
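A minimal sketch of what I mean (the clocks.txt file name and the one-second interval are just what I happen to use; adjust the grep for your own GPU index):

while true; do
    # one line per GPU comes back; keep only the third card (gpu:2)
    nvidia-settings -q GPUCurrentClockFreqs | grep 'gpu:2' >> clocks.txt
    sleep 1
done

Each captured line ends with something like “…[gpu:2]): 705,2600.”, i.e. graphics clock and memory clock separated by a comma, so the two values are easy to split out in the spreadsheet afterwards.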