Monitor GPU usage with nvidia-smi

I tried to run tensorflow-gpu on Ubuntu 18.04 LTS with a GeForce GTX 650 GPU. The installation went fine, and TensorFlow works in my virtual environment. However, I am unable to monitor GPU usage with the nvidia-smi command beyond the overall temperature and memory usage; it would be nice to see each process running on the GPU.

The output of the nvidia-smi command without a running TensorFlow process is as follows:

Thu Sep 27 12:43:54 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48                 Driver Version: 410.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 650     Off  | 00000000:02:00.0 N/A |                  N/A |
| 21%   41C    P0    N/A /  N/A |     816MiB /  979MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

Is there any other way to monitor the GPU usage?
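One thing worth trying is nvidia-smi's CSV query interface (`--query-gpu` / `--format=csv`), which is easier to poll from a script than the boxed table. On a card like this the utilization fields may still come back as the literal string "[Not Supported]", but the memory figures usually work. Below is a minimal, hedged sketch: the field list and the `parse_query_csv` helper are my own for illustration, not something from this thread.

```python
import csv
import io
import subprocess


def parse_query_csv(text):
    """Parse `nvidia-smi --format=csv` output into a list of row dicts.

    Fields the driver cannot read are reported as the literal string
    "[Not Supported]", which is what a GTX 650 will likely return for
    the utilization counters.
    """
    reader = csv.reader(io.StringIO(text), skipinitialspace=True)
    header = next(reader)
    return [dict(zip(header, row)) for row in reader]


if __name__ == "__main__":
    # Requires the NVIDIA driver; run once and print each GPU's stats.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv"],
        text=True,
    )
    for row in parse_query_csv(out):
        print(row)
```

For continuous monitoring, `nvidia-smi -l 1` re-runs the query every second, and `nvidia-smi dmon` / `nvidia-smi pmon` provide rolling device- and process-level views where the driver supports them.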


And it’s not just nvidia-smi: the GPU page of nvidia-settings always lists:

GPU Utilization: 0%
Video Engine Utilization: 0%
PCIe Bandwidth Utilization: 0%

regardless of the actual GPU usage…

I see this with any of my 8800 GT, GTX 460, GTX 660, or GTX 970 cards, and with every NVIDIA driver version I have ever run on them.

I suspect the vendor of your cards didn’t buy a license for that feature.

That would mean 3 different vendors then, and not the small ones…

At least this seems to be a common problem. Still, is there any way around this?

Some cards fail to show anything at all; some have more duds than others. The way around that might be NVAPI.
Unfortunately, it is only available for Windows, and the docs for most parts require an NDA.