I am logging into a remote server with 4 GPUs installed. I tried rebooting the server, but $ nvidia-smi still gives the same output as shown below.
I am not able to find any similar issues online, so I am not sure what to try in order to fix the problem. Any help is appreciated!
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.67                 Driver Version: 390.67                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  On   | 00000000:04:00.0 Off |                    0 |
| N/A   29C    P0    24W / 250W |      0MiB / 12198MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-PCIE...  On   | 00000000:05:00.0 Off |                    0 |
| N/A   30C    P0    24W / 250W |      0MiB / 12198MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla P100-PCIE...  On   | 00000000:88:00.0 Off |                    0 |
| N/A   27C    P0    24W / 250W |      0MiB / 12198MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla P100-PCIE...  On   | 00000000:89:00.0 Off |                    0 |
| N/A   30C    P0    25W / 250W |      0MiB / 12198MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
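
In case it helps with diagnosis, this is the minimal check I can run next to see whether a CUDA application can actually enumerate the devices. It is only a sketch using the standard CUDA runtime calls cudaGetDeviceCount and cudaGetDeviceProperties; the file name check_devices.cu is just something I made up:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // Print the runtime error text, e.g. "no CUDA-capable device is detected"
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA runtime sees %d device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess) {
            // Report each visible GPU's name and total memory
            std::printf("  device %d: %s, %zu MiB\n",
                        i, prop.name, prop.totalGlobalMem >> 20);
        }
    }
    return 0;
}

I would compile it with nvcc check_devices.cu -o check_devices and run ./check_devices. If it errors out or reports 0 devices even though nvidia-smi lists all four GPUs, I assume that would at least narrow the problem down to the CUDA/application side rather than the driver, but I am not certain how to interpret it beyond that.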