I currently have a setup with two GPUs on the same PCIe switch. I confirmed this by running nvidia-smi topo -m, which reports "PIX" as the connection between the two GPUs.
        GPU0    GPU1    CPU Affinity
GPU0     X      PIX     0-17
GPU1    PIX      X      0-17

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing a single PCIe switch
  NV#  = Connection traversing a bonded set of # NVLinks
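As a cross-check of what the topology reports, I also query P2P support directly through the CUDA runtime. This is just a minimal sketch I use (it assumes the two GPUs are devices 0 and 1), not part of the failing sample:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Ask the driver whether each GPU can access the other's memory via P2P.
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    printf("GPU0 -> GPU1 peer access supported: %s\n", canAccess01 ? "Yes" : "No");
    printf("GPU1 -> GPU0 peer access supported: %s\n", canAccess10 ? "Yes" : "No");

    // Also query the P2P attribute the driver reports for the pair.
    int accessSupported = 0;
    cudaDeviceGetP2PAttribute(&accessSupported, cudaDevP2PAttrAccessSupported, 0, 1);
    printf("cudaDevP2PAttrAccessSupported (0 -> 1): %d\n", accessSupported);
    return 0;
}

Both directions come back as supported, consistent with what simpleP2P prints below.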
I then built and ran simpleP2P from the CUDA samples.
[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2
> GPU0 = "Tesla V100-PCIE-32GB" IS capable of Peer-to-Peer (P2P)
> GPU1 = "Tesla V100-PCIE-32GB" IS capable of Peer-to-Peer (P2P)

Checking GPU(s) for support of peer to peer memory access...
> Peer access from Tesla V100-PCIE-32GB (GPU0) -> Tesla V100-PCIE-32GB (GPU1) : Yes
> Peer access from Tesla V100-PCIE-32GB (GPU1) -> Tesla V100-PCIE-32GB (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Checking GPU0 and GPU1 for UVA capabilities...
> Tesla V100-PCIE-32GB (GPU0) supports UVA: Yes
> Tesla V100-PCIE-32GB (GPU1) supports UVA: Yes
Both GPUs can support UVA, enabling...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 1.05GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Verification error @ element 0: val = nan, ref = 0.000000
Verification error @ element 1: val = nan, ref = 4.000000
Verification error @ element 2: val = nan, ref = 8.000000
Verification error @ element 3: val = nan, ref = 12.000000
Verification error @ element 4: val = nan, ref = 16.000000
Verification error @ element 5: val = nan, ref = 20.000000
Verification error @ element 6: val = nan, ref = 24.000000
Verification error @ element 7: val = nan, ref = 28.000000
Verification error @ element 8: val = nan, ref = 32.000000
Verification error @ element 9: val = nan, ref = 36.000000
Verification error @ element 10: val = nan, ref = 40.000000
Verification error @ element 11: val = nan, ref = 44.000000
Disabling peer access...
Shutting down...
Test failed!
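For context, the step that fails boils down to something like the following stripped-down sketch of my own (not the actual sample code; the buffer size, the 4*i test pattern, and device indices 0/1 are assumptions mirroring the output above): enable peer access, cudaMemcpyPeer from GPU0 to GPU1, then read the destination back and verify.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    const size_t N = 1 << 24;              // 16M floats = 64 MB, like the sample
    const size_t bytes = N * sizeof(float);

    // Enable peer access in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    // Allocate one buffer on each GPU.
    float *d0 = nullptr, *d1 = nullptr;
    cudaSetDevice(0);
    cudaMalloc(&d0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&d1, bytes);

    // Fill GPU0's buffer with a known pattern from the host.
    std::vector<float> host(N);
    for (size_t i = 0; i < N; ++i) host[i] = 4.0f * i;
    cudaSetDevice(0);
    cudaMemcpy(d0, host.data(), bytes, cudaMemcpyHostToDevice);

    // Copy GPU0 -> GPU1 over P2P, then read GPU1's buffer back to the host.
    cudaMemcpyPeer(d1, 1, d0, 0, bytes);
    std::vector<float> check(N);
    cudaSetDevice(1);
    cudaMemcpy(check.data(), d1, bytes, cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();

    // Verify the first few elements actually arrived.
    for (int i = 0; i < 8; ++i)
        printf("element %d: got %f, expected %f\n", i, check[i], 4.0f * i);

    cudaFree(d1);
    cudaSetDevice(0);
    cudaFree(d0);
    return 0;
}

In my case the read-back values are NaN/garbage rather than the expected pattern, even though every call returns success.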
So everything looks fine up to that point, and no CUDA errors are reported, yet no data is actually transferred and the verification fails. What could be causing this? When I move one GPU to a different slot, so that nvidia-smi topo -m reports a "NODE" connection instead, the transfer works correctly.