CUDA devices flapping on/off (NVRM: rm_init_adapter failed for device) - POWER8/CUDA 8.0/CentOS 7.2

We’ve installed CentOS 7 on a POWER8 system with Tesla K80s and added CUDA 8.0 using the following commands:

wget https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le-rpm

mv cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le-rpm cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le.rpm

yum install cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le.rpm

yum clean expire-cache

yum install cuda
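
For reference, the driver package the local repo pulls in can be confirmed after the install (standard commands, nothing specific to this setup):

rpm -q cuda-drivers

cat /proc/driver/nvidia/version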

The CUDA devices flap on and off between successive nvidia-smi invocations, and the kernel prints NVRM errors each time initialization fails:

[root@server ~]# nvidia-smi --list-gpus
GPU 0: Tesla K80 (UUID: GPU-7ac3e9f2-0ae2-f989-6604-35270c7d7206)
GPU 1: Tesla K80 (UUID: GPU-865025f8-05e1-d545-451b-1c3afb4ee48f)
GPU 2: Tesla K80 (UUID: GPU-e2431a3b-9d84-2f69-c0f2-6d5f5621f9c8)
GPU 3: Tesla K80 (UUID: GPU-6fb9172e-cfe3-7791-e229-ae6f2ae6d7d2)

[root@server ~]# nvidia-smi --list-gpus
[ 1207.101961] NVRM: DMA address not in addressable range of device 0002:03:00 (0x800017f49470000-0x800017f4947ffff, 0x800000000000000-0x80000ffffffffff)
[ 1207.103772] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1207.103826] NVRM: rm_init_adapter failed for device bearing minor number 0
[ 1207.370848] NVRM: DMA address not in addressable range of device 0002:04:00 (0x800017f45500000-0x800017f4550ffff, 0x800000000000000-0x80000ffffffffff)
[ 1207.372416] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1207.372468] NVRM: rm_init_adapter failed for device bearing minor number 1
[ 1207.631526] NVRM: DMA address not in addressable range of device 0006:03:00 (0x800017f46bc0000-0x800017f46bcffff, 0x800000000000000-0x80000ffffffffff)
[ 1207.633272] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1207.633324] NVRM: rm_init_adapter failed for device bearing minor number 2
[ 1207.892832] NVRM: DMA address not in addressable range of device 0006:04:00 (0x800017f46d10000-0x800017f46d1ffff, 0x800000000000000-0x80000ffffffffff)
[ 1207.894799] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1207.894853] NVRM: rm_init_adapter failed for device bearing minor number 3
No devices found.

[root@server ~]# nvidia-smi --list-gpus
[ 1214.742390] NVRM: DMA address not in addressable range of device 0006:03:00 (0x800017f4b970000-0x800017f4b97ffff, 0x800000000000000-0x80000ffffffffff)
[ 1214.743800] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1214.743856] NVRM: rm_init_adapter failed for device bearing minor number 2
[ 1215.003872] NVRM: DMA address not in addressable range of device 0006:04:00 (0x800017f466f0000-0x800017f466fffff, 0x800000000000000-0x80000ffffffffff)
[ 1215.005835] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1215.005890] NVRM: rm_init_adapter failed for device bearing minor number 3
GPU 0: Tesla K80 (UUID: GPU-7ac3e9f2-0ae2-f989-6604-35270c7d7206)
GPU 1: Tesla K80 (UUID: GPU-865025f8-05e1-d545-451b-1c3afb4ee48f)

[root@server ~]# nvidia-smi --list-gpus
[ 1225.376805] NVRM: DMA address not in addressable range of device 0006:04:00 (0x800017f466d0000-0x800017f466dffff, 0x800000000000000-0x80000ffffffffff)
[ 1225.379112] NVRM: RmInitAdapter failed! (0x24:0x1e:1048)
[ 1225.379170] NVRM: rm_init_adapter failed for device bearing minor number 3
GPU 0: Tesla K80 (UUID: GPU-7ac3e9f2-0ae2-f989-6604-35270c7d7206)
GPU 1: Tesla K80 (UUID: GPU-865025f8-05e1-d545-451b-1c3afb4ee48f)
GPU 2: Tesla K80 (UUID: GPU-e2431a3b-9d84-2f69-c0f2-6d5f5621f9c8)

[root@server ~]# nvidia-smi --list-gpus
GPU 0: Tesla K80 (UUID: GPU-7ac3e9f2-0ae2-f989-6604-35270c7d7206)
GPU 1: Tesla K80 (UUID: GPU-865025f8-05e1-d545-451b-1c3afb4ee48f)
GPU 2: Tesla K80 (UUID: GPU-e2431a3b-9d84-2f69-c0f2-6d5f5621f9c8)
GPU 3: Tesla K80 (UUID: GPU-6fb9172e-cfe3-7791-e229-ae6f2ae6d7d2)

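To show the flapping more clearly, the device list can be polled while following the kernel log. This is just a minimal sketch (dmesg -w is the util-linux --follow option, which CentOS 7 ships):

# Stream NVRM kernel messages in the background, then poll the GPU list
dmesg -w | grep --line-buffered NVRM &
while true; do nvidia-smi --list-gpus; sleep 5; done
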
Here is some additional information:

[root@server ~]# uname -r
3.10.0-514.el7.ppc64le

[root@server ~]# rpm -qa |grep -i cuda
cuda-cufft-8-0-8.0.54-1.ppc64le
cuda-nvrtc-dev-8-0-8.0.54-1.ppc64le
cuda-demo-suite-8-0-8.0.54-1.ppc64le
cuda-cublas-dev-8-0-8.0.54-1.ppc64le
cuda-core-8-0-8.0.54-1.ppc64le
cuda-visual-tools-8-0-8.0.54-1.ppc64le
cuda-misc-headers-8-0-8.0.54-1.ppc64le
cuda-cudart-8-0-8.0.54-1.ppc64le
cuda-curand-dev-8-0-8.0.54-1.ppc64le
cuda-cusolver-dev-8-0-8.0.54-1.ppc64le
cuda-drivers-361.107-1.ppc64le
cuda-documentation-8-0-8.0.54-1.ppc64le
cuda-nvgraph-8-0-8.0.54-1.ppc64le
cuda-cublas-8-0-8.0.54-1.ppc64le
cuda-8.0.54-1.ppc64le
cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le
cuda-driver-dev-8-0-8.0.54-1.ppc64le
cuda-npp-dev-8-0-8.0.54-1.ppc64le
cuda-cufft-dev-8-0-8.0.54-1.ppc64le
cuda-command-line-tools-8-0-8.0.54-1.ppc64le
cuda-cusparse-dev-8-0-8.0.54-1.ppc64le
cuda-curand-8-0-8.0.54-1.ppc64le
cuda-cusolver-8-0-8.0.54-1.ppc64le
cuda-8-0-8.0.54-1.ppc64le
cuda-license-8-0-8.0.54-1.ppc64le
cuda-cusparse-8-0-8.0.54-1.ppc64le
cuda-npp-8-0-8.0.54-1.ppc64le
cuda-samples-8-0-8.0.54-1.ppc64le
cuda-toolkit-8-0-8.0.54-1.ppc64le
cuda-nvml-dev-8-0-8.0.54-1.ppc64le
cuda-cudart-dev-8-0-8.0.54-1.ppc64le
cuda-nvgraph-dev-8-0-8.0.54-1.ppc64le
cuda-nvrtc-8-0-8.0.54-1.ppc64le
cuda-runtime-8-0-8.0.54-1.ppc64le
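
The domain:bus:device addresses in the NVRM messages (0002:03:00, 0002:04:00, 0006:03:00, 0006:04:00) should be the GPUs themselves; they can be cross-checked against the PCI bus (10de is NVIDIA’s vendor ID):

lspci -d 10de: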

Has anyone seen this behavior, or does anyone have ideas on how to fix it?
Attached: nvidia-bug-report.log.gz (54.1 KB)