I have installed two Tesla P100s in my machine, but they cannot access each other's memory directly (peer-to-peer). According to nvidia-smi topo -m, the two GPUs are attached to different CPU sockets (SOC):
nvidia-smi topo -m
GPU0 GPU1 CPU Affinity
GPU0 X SOC 0-7,16-23
GPU1 SOC X 8-15,24-31
X = Self
SOC = Connection traversing PCIe as well as the SMP link between CPU sockets (e.g., QPI)
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
PIX = Connection traversing a single PCIe switch
NV# = Connection traversing a bonded set of # NVLinks
However, the simpleP2P sample reports that the GPUs cannot access each other as peers:
[/usr/local/cuda/samples/0_Simple/simpleP2P/simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2
GPU0 = "Tesla P100-PCIE-16GB" IS capable of Peer-to-Peer (P2P)
GPU1 = "Tesla P100-PCIE-16GB" IS capable of Peer-to-Peer (P2P)
Checking GPU(s) for support of peer to peer memory access...
Peer access from Tesla P100-PCIE-16GB (GPU0) -> Tesla P100-PCIE-16GB (GPU1) : No
Peer access from Tesla P100-PCIE-16GB (GPU1) -> Tesla P100-PCIE-16GB (GPU0) : No
Two or more GPUs with SM 2.0 or higher capability are required for /usr/local/cuda/samples/0_Simple/simpleP2P/simpleP2P.
Peer to Peer access is not available amongst GPUs in the system, waiving test.
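For reference, the check that simpleP2P performs can be reproduced with a few CUDA runtime API calls. This is a minimal sketch assuming two devices with IDs 0 and 1; it only queries whether the driver permits peer access, it does not enable or exercise it:

```cuda
// Minimal peer-access query, mirroring what simpleP2P checks.
// Assumes at least two CUDA-capable devices (IDs 0 and 1).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA-capable device count: %d\n", count);
    if (count < 2) return 1;

    for (int a = 0; a < 2; ++a) {
        for (int b = 0; b < 2; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            // Returns 1 only when the driver allows direct P2P
            // between the two devices.
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            printf("Peer access GPU%d -> GPU%d : %s\n",
                   a, b, canAccess ? "Yes" : "No");
        }
    }
    return 0;
}
```

For PCIe GPUs whose only connection is the inter-socket SMP link (the SOC entry in the topology matrix), cudaDeviceCanAccessPeer typically reports 0, which matches the "No" results above: the driver does not support P2P across QPI for PCIe-attached devices.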