Error running nvidia-smi

I have a number of nodes that show an ERR! message when running nvidia-smi. It looks like a sensor error. I'm told that, as a result, they can't run a simple MPI/NCCL test. Is this error hardware or software related? This is running on a POWER8 machine with P100 GPUs.

| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-SXM2...  On   | 00000002:01:00.0 Off |                    2 |
| N/A   36C    P0   ERR! / 300W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-SXM2...  On   | 00000003:01:00.0 Off |                    0 |
| N/A   36C    P0    30W / 300W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla P100-SXM2...  On   | 00000006:01:00.0 Off |                    0 |
| N/A   30C    P0    29W / 300W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla P100-SXM2...  On   | 00000007:01:00.0 Off |                    0 |
| N/A   27C    P0    29W / 300W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
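
To check whether it really is just the power sensor that is failing, I can poll the power reading for each GPU through NVML, which is the same library nvidia-smi reads from. Below is a minimal sketch using the pynvml Python bindings (assuming they are installed on the node); on GPU 0 I would expect the query to raise an NVML error instead of returning a wattage.

# Minimal sketch (assumes the pynvml bindings are installed): poll each
# GPU's power sensor through NVML, the same source nvidia-smi uses.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older pynvml versions return bytes
            name = name.decode()
        try:
            milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # reported in mW
            print("GPU %d (%s): %.1f W" % (i, name, milliwatts / 1000.0))
        except pynvml.NVMLError as err:
            # The GPU showing ERR! in nvidia-smi should fail here with an
            # NVML error code instead of a reading.
            print("GPU %d (%s): power query failed (%s)" % (i, name, err))
finally:
    pynvml.nvmlShutdown()

From the command line, nvidia-smi -q -d POWER on the affected node should show the same per-GPU power readings in more detail.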