NVIDIA on Arch, Power Management and Compute Processes

Hi, I recently installed Arch on my computer and updated to the newest NVIDIA driver. However, when I run “nvidia-smi”, I get the output shown below. I noticed that:

[1] The Pwr:Usage/Cap (power management) readings on the GTX 285 and the Tesla C2070 show N/A, but the readings on the two Tesla C2075 cards are still normal.
[2] The “Compute processes” entry for the GTX 285 shows “Not Supported”.

I am quite stuck on this. Can anyone here help me out, please?

Thanks.

+------------------------------------------------------+
| NVIDIA-SMI 337.12     Driver Version: 337.12         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 285     Off  | 0000:02:00.0     N/A |                  N/A |
| 40%   56C  N/A     N/A /  N/A |      3MiB /  2047MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla C2075         Off  | 0000:03:00.0     Off |                    0 |
| 30%   51C    P0    80W / 225W |      9MiB /  5375MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla C2070         Off  | 0000:84:00.0     Off |                    0 |
| 30%   56C    P0    N/A /  N/A |      9MiB /  5375MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla C2075         Off  | 0000:85:00.0     Off |                    0 |
| 30%   56C    P0    81W / 225W |      9MiB /  5375MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0            Not Supported                                               |
+-----------------------------------------------------------------------------+

Some time ago you could see all kinds of helpful information for GeForce cards with nvidia-smi. I don’t know whether NVIDIA removed this intentionally or whether it is just a bug in the NVML library, since you can still get the data if you have your GeForce cards identify themselves as supported cards to that library.
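
For context: nvidia-smi gets those numbers through NVML, and the N/A / “Not Supported” fields just mean the corresponding NVML call returned NVML_ERROR_NOT_SUPPORTED for that board. Here is a minimal C sketch (assuming the nvml.h header and libnvidia-ml from your driver/CUDA install) that queries the power draw directly, so you can reproduce the same behaviour outside of nvidia-smi:

/* power_query.c - query power draw for every GPU directly through NVML.
 * Build (library name/path may vary): gcc power_query.c -o power_query -lnvidia-ml
 */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; i++) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int mw = 0;  /* power draw in milliwatts */

        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
            continue;
        nvmlDeviceGetName(dev, name, sizeof(name));

        rc = nvmlDeviceGetPowerUsage(dev, &mw);
        if (rc == NVML_SUCCESS)
            printf("GPU %u (%s): %.1f W\n", i, name, mw / 1000.0);
        else
            /* GeForce boards typically land here with
             * NVML_ERROR_NOT_SUPPORTED, which nvidia-smi shows as N/A. */
            printf("GPU %u (%s): %s\n", i, name, nvmlErrorString(rc));
    }

    nvmlShutdown();
    return 0;
}

The workaround I mentioned is described like this: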

“This workaround is a shim that sits between the program trying to use NVML (Ganglia plugin, pyNVML, nvidia-smi, …) and the actual NVML library itself. Whenever a device handle is requested from NVML, the shim flips an internal “supported” flag before returning it to the hosted program. Therefore, when the handle is used in subsequent calls to the library, NVML correctly sees that the device is in fact supported, and returns information properly.”
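
To give an idea of how such a shim hooks in, here is a rough LD_PRELOAD-style sketch of the interposition part only. The NVML names and types are real, but the file names and patch_supported_flag() are made up for the sketch: the actual workaround pokes an undocumented field inside the opaque device handle, and that offset changes between driver versions, which is exactly why it breaks whenever the driver is updated.

/* nvml_shim.c - interposition sketch only, not the actual package code.
 * Build: gcc -shared -fPIC nvml_shim.c -o libnvml_shim.so -ldl
 * Use:   LD_PRELOAD=./libnvml_shim.so nvidia-smi
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <nvml.h>

/* Hypothetical placeholder: the real shim flips an internal "supported"
 * flag inside the opaque handle here; there is no public API for that and
 * the field offset is driver-specific. */
static void patch_supported_flag(nvmlDevice_t device)
{
    (void)device;
}

/* Interpose the handle lookup: forward to the real NVML, then patch the
 * handle before it is handed back to nvidia-smi, pyNVML, etc.
 * Note: newer nvml.h headers map this name to a _v2 symbol via a macro,
 * and the real workaround hooks several entry points, not just this one. */
nvmlReturn_t nvmlDeviceGetHandleByIndex(unsigned int index, nvmlDevice_t *device)
{
    static nvmlReturn_t (*real_fn)(unsigned int, nvmlDevice_t *);

    if (!real_fn)
        real_fn = (nvmlReturn_t (*)(unsigned int, nvmlDevice_t *))
                  dlsym(RTLD_NEXT, "nvmlDeviceGetHandleByIndex");

    nvmlReturn_t rc = real_fn(index, device);
    if (rc == NVML_SUCCESS)
        patch_supported_flag(*device);
    return rc;
}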

You’ll also find the shim in the Arch User Repository. Unfortunately, it has not been updated in a while. Maybe you can adapt it to support the latest driver; that would be a cool weekend project.

I really wish it did not require any hacks to get all of the cool data back. I’d love to decorate my desktop with it through conky. Now that overclocking is back in the Linux driver, it would also be very handy to be able to keep an eye on the core voltage. Right now you don’t know if, and by how much, the driver raises the voltage when you increase the clock speeds in nvidia-settings.