Hi Detlef,
Thank you for confirming that this setup is not officially supported.
It is also unfortunate to learn about the NVLink limitations, such as both slots needing to be x16 electrical. That limitation is not obvious from your product page below, and it seems only very high-end Intel CPUs provide 32+ PCIe lanes.
https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/
I have tested my “hacky” setup anyway, and the result is amazing!
On Linux, with NVIDIA Driver 430.14:
From CUDA 10.0 - p2pBandwidthLatencyTest:
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, GeForce RTX 2080 Ti, pciBusID: 17, pciDeviceID: 0, pciDomainID:0
Device: 1, GeForce RTX 2080 Ti, pciBusID: 65, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=1 CAN Access Peer Device=0
NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1
0 1 1
1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 529.28 4.02
1 3.96 530.92
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1
0 531.85 46.93
1 46.98 530.74
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 535.48 7.39
1 7.39 534.86
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1
0 531.27 93.54
1 93.72 532.86
P2P=Disabled Latency Matrix (us)
GPU 0 1
0 1.58 11.43
1 11.37 1.92
CPU 0 1
0 2.91 6.99
1 7.00 2.73
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1
0 1.58 1.61
1 1.76 1.92
CPU 0 1
0 3.11 1.98
1 1.98 2.80
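For reference, the “CAN Access Peer” lines and the connectivity matrix above boil down to a per-pair query of cudaDeviceCanAccessPeer(). A minimal sketch of that check (my own code, not the sample itself):
#include <cstdio>
#include <cuda_runtime.h>
int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int src = 0; src < deviceCount; ++src)
    {
        for (int dst = 0; dst < deviceCount; ++dst)
        {
            if (src == dst) continue;
            int canAccess = 0;
            // 1 means device 'src' can map and access memory on device 'dst'.
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("Device=%d %s Access Peer Device=%d\n",
                   src, canAccess ? "CAN" : "CANNOT", dst);
        }
    }
    return 0;
}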
From CUDA 10.0 - simpleP2P:
Checking for multiple GPUs...
CUDA-capable device count: 2
> GPU0 = "GeForce RTX 2080 Ti" IS capable of Peer-to-Peer (P2P)
> GPU1 = "GeForce RTX 2080 Ti" IS capable of Peer-to-Peer (P2P)
Checking GPU(s) for support of peer to peer memory access...
> Peer access from GeForce RTX 2080 Ti (GPU0) -> GeForce RTX 2080 Ti (GPU1) : Yes
> Peer access from GeForce RTX 2080 Ti (GPU1) -> GeForce RTX 2080 Ti (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Checking GPU0 and GPU1 for UVA capabilities...
> GeForce RTX 2080 Ti (GPU0) supports UVA: Yes
> GeForce RTX 2080 Ti (GPU1) supports UVA: Yes
Both GPUs can support UVA, enabling...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 43.58GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed
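As I understand it, the core of what simpleP2P exercises is roughly the following (my own sketch, assuming GPU0 and GPU1 with peer access available, not the sample's actual code): enable peer access in both directions, then copy directly from one GPU to the other.
#include <cuda_runtime.h>
int main()
{
    const size_t bytes = 64 * 1024 * 1024;  // 64 MB, as in the sample output
    float *d0 = nullptr, *d1 = nullptr;
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);       // let GPU0 access GPU1's memory
    cudaMalloc(&d0, bytes);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);       // let GPU1 access GPU0's memory
    cudaMalloc(&d1, bytes);
    // Direct device-to-device copy; with peer access enabled this goes over
    // NVLink (or PCIe) without staging through host memory.
    cudaMemcpyPeer(d1, 1, d0, 0, bytes);
    cudaDeviceSynchronize();
    cudaSetDevice(0);
    cudaDeviceDisablePeerAccess(1);
    cudaFree(d0);
    cudaSetDevice(1);
    cudaDeviceDisablePeerAccess(0);
    cudaFree(d1);
    return 0;
}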
To double-check, I redid the test without NVLink:
From CUDA 10.0 - p2pBandwidthLatencyTest:
Device: 0, GeForce RTX 2080 Ti, pciBusID: 17, pciDeviceID: 0, pciDomainID:0
Device: 1, GeForce RTX 2080 Ti, pciBusID: 65, pciDeviceID: 0, pciDomainID:0
Device=0 CANNOT Access Peer Device=1
Device=1 CANNOT Access Peer Device=0
NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1
0 1 0
1 0 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 527.87 4.03
1 3.97 530.74
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1
0 531.26 4.02
1 3.98 532.01
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 530.90 7.65
1 7.64 536.53
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1
0 530.74 7.65
1 7.66 535.47
P2P=Disabled Latency Matrix (us)
GPU 0 1
0 1.27 15.05
1 11.38 1.91
CPU 0 1
0 3.03 6.88
1 6.82 3.06
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1
0 1.28 13.89
1 11.30 1.91
CPU 0 1
0 3.21 6.80
1 6.89 2.74
From CUDA 10.0 - simpleP2P:
[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2
> GPU0 = "GeForce RTX 2080 Ti" IS capable of Peer-to-Peer (P2P)
> GPU1 = "GeForce RTX 2080 Ti" IS capable of Peer-to-Peer (P2P)
Checking GPU(s) for support of peer to peer memory access...
> Peer access from GeForce RTX 2080 Ti (GPU0) -> GeForce RTX 2080 Ti (GPU1) : No
> Peer access from GeForce RTX 2080 Ti (GPU1) -> GeForce RTX 2080 Ti (GPU0) : No
Two or more GPUs with SM 2.0 or higher capability are required for ./simpleP2P.
Peer to Peer access is not available amongst GPUs in the system, waiving test.
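For application code, the fallback the NOTE above mentions can be handled with a small guard; here is my own sketch (a pattern I would try, not something from the samples): only enable peer access when the runtime reports it, and rely on cudaMemcpyPeer staging through host memory otherwise.
#include <cuda_runtime.h>
void copyGpu0ToGpu1(void* dstOnGpu1, const void* srcOnGpu0, size_t bytes)
{
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 1, 0);  // can GPU1 access GPU0?
    if (canAccess)
    {
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);       // direct path (NVLink here)
    }
    // Works either way: direct peer copy when enabled, host-staged otherwise.
    cudaMemcpyPeer(dstOnGpu1, 1, srcOnGpu0, 0, bytes);
    cudaDeviceSynchronize();
}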
Maybe I have hit a very specific case that happens to work with peer-to-peer.
Until an officially supported setup is ready, perhaps I could use this configuration with OptiX, but I don’t know whether OptiX supports this “compatible setup”. Do you have any suggestions I could try?
Thanks.