Compatibility of NVLink bridges

Hi,

I already have a pair of NVLink GV100 bridges, and I have connected two RTX 2080 Ti cards with one of them. My question is: can I get the best performance/bandwidth with such a setup (GV100 bridge + 2x RTX 2080 Ti)?

From the link below, it is recommended to use a different and much cheaper NVLink bridge for RTX cards:
[url]https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/[/url]

From the link below, only the bandwidth specifications of the different bridges are listed.
[url]https://www.nvidia.com/en-us/design-visualization/quadro-store/#[/url]

This leads to an additional question: what are the differences between these bridges, and why do their prices differ so much?

Thanks.

That won’t work at all!
It’s not possible to mix NVLINK bridges from different GPU generations.
Each GPU architecture comes with a specific matching bridge logic.

Normally Pascal bridges are silver, Volta bridges are golden, Turing bridges are silver + green.
Quadro bridges come in 2-slot and 3-slot widths.
GeForce bridges come in 3-slot and 4-slot widths.
GeForce and Quadro bridges must also not be mixed.
The GV100 must have both its NVLINK bridges installed.

Current CUDA 10 drivers allow peer-to-peer access also under Windows 10 (WDDM2), but only when SLI is enabled inside the NVIDIA Control Panel.
Experience shows that this is only available when the NVLINK boards are installed in identical PCI-E slots, meaning both should be x16 electrical; x16 + x8 won't work. They should also be connected to the same CPU.
CPUs with only 28 PCI-E lanes for example can’t fulfill that.
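
Whether the driver actually exposes peer-to-peer in a given configuration can be checked directly from the CUDA runtime. A minimal sketch (untested, just illustrating the relevant API calls):

[code]
// Query and, if available, enable peer-to-peer access between device 0 and device 1.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);  // can device 0 reach device 1?
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);  // and the reverse direction?
    printf("0 -> 1: %s, 1 -> 0: %s\n",
           canAccess01 ? "yes" : "no",
           canAccess10 ? "yes" : "no");

    if (canAccess01 && canAccess10)
    {
        // Peer access must be enabled explicitly, once per direction,
        // from the device that will issue the accesses.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
    }
    return 0;
}
[/code]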

Under Tesla Compute Cluster (TCC) driver mode, peer-to-peer has always worked, also under Linux. The SLI state would not matter then. GeForce boards do not support TCC mode.
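
If in doubt about the driver mode, it can also be queried from the runtime; again just a sketch:

[code]
// Report whether each visible device runs under the TCC driver (Windows only;
// the attribute simply reads 0 under WDDM or on Linux).
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev)
    {
        int tcc = 0;
        cudaDeviceGetAttribute(&tcc, cudaDevAttrTccDriver, dev);
        printf("Device %d: TCC driver = %s\n", dev, tcc ? "yes" : "no");
    }
    return 0;
}
[/code]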

Depending on the workstation motherboard type, installing the graphics boards in an NVLINK setup requires the specific bridge width that matches your PCI-E slot configuration, e.g. these for Quadro:
https://www.nvidia.com/en-us/design-visualization/nvlink-bridges/
Notice the different bandwidths! Even RTX5000 and RTX6000/8000 require different bridges.

The same applies to consumer motherboards, which normally have a different PCI-E lane layout and therefore require the wider bridges.

Hi Detlef,

Thank you for the confirmation that this setup is not officially supported.

It is also unfortunate to learn about the limitations of using NVLINK, such as “both should be x16 electrical”. This limitation is not obvious from the page below. It seems only very high-end Intel CPUs provide 32+ PCIE lanes.
https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/
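
As a small sanity check, the PCI location of each board can be printed from the CUDA runtime to see where the cards actually sit (this does not show the electrical lane width; for that, nvidia-smi or the motherboard manual is needed). A minimal sketch:

[code]
// Print the PCI domain/bus/device of each CUDA device.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s, domain %d, bus %d, device %d\n",
               dev, prop.name, prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}
[/code]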

I have tested my “hacky” setup anyway. The result is amazing!
On Linux, with NVIDIA Driver 430.14:

From CUDA 10.0 - p2pBandwidthLatencyTest:

[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, GeForce RTX 2080 Ti, pciBusID: 17, pciDeviceID: 0, pciDomainID:0
Device: 1, GeForce RTX 2080 Ti, pciBusID: 65, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=1 CAN Access Peer Device=0

NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.

P2P Connectivity Matrix
     D\D     0     1
     0       1     1
     1       1     1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 529.28   4.02
     1   3.96 530.92
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
   D\D     0      1
     0 531.85  46.93
     1  46.98 530.74
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 535.48   7.39
     1   7.39 534.86
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 531.27  93.54
     1  93.72 532.86
P2P=Disabled Latency Matrix (us)
   GPU     0      1
     0   1.58  11.43
     1  11.37   1.92

   CPU     0      1
     0   2.91   6.99
     1   7.00   2.73
P2P=Enabled Latency (P2P Writes) Matrix (us)
   GPU     0      1
     0   1.58   1.61
     1   1.76   1.92

   CPU     0      1
     0   3.11   1.98
     1   1.98   2.80

From CUDA 10.0 - simpleP2P:

Checking for multiple GPUs...
CUDA-capable device count: 2
> GPU0 = "GeForce RTX 2080 Ti" IS  capable of Peer-to-Peer (P2P)
> GPU1 = "GeForce RTX 2080 Ti" IS  capable of Peer-to-Peer (P2P)

Checking GPU(s) for support of peer to peer memory access...
> Peer access from GeForce RTX 2080 Ti (GPU0) -> GeForce RTX 2080 Ti (GPU1) : Yes
> Peer access from GeForce RTX 2080 Ti (GPU1) -> GeForce RTX 2080 Ti (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Checking GPU0 and GPU1 for UVA capabilities...
> GeForce RTX 2080 Ti (GPU0) supports UVA: Yes
> GeForce RTX 2080 Ti (GPU1) supports UVA: Yes
Both GPUs can support UVA, enabling...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 43.58GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed
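
For reference, the numbers above essentially come from timed peer copies; a reduced sketch of that kind of measurement (my own simplification, not the actual sample source, error checking omitted) looks like this:

[code]
// Time repeated device-to-device copies over NVLINK/PCI-E with CUDA events.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 64u * 1024u * 1024u;   // 64 MB, as in simpleP2P
    void *buf0 = nullptr, *buf1 = nullptr;

    // Allocate one buffer per GPU and enable peer access in both directions
    // (assumes the peer-access checks shown earlier succeed).
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);
    cudaDeviceEnablePeerAccess(0, 0);

    cudaSetDevice(0);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iterations = 100;
    cudaEventRecord(start);
    for (int i = 0; i < iterations; ++i)
        cudaMemcpyPeerAsync(buf1, 1, buf0, 0, bytes, 0);  // GPU0 -> GPU1
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gbps = (double)bytes * iterations / (ms * 1.0e6);
    printf("Peer copy bandwidth: %.2f GB/s\n", gbps);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(buf0);
    cudaSetDevice(1);
    cudaFree(buf1);
    return 0;
}
[/code]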

To double-check, I redid the test without NVLINK:

From CUDA 10.0 - p2pBandwidthLatencyTest:

Device: 0, GeForce RTX 2080 Ti, pciBusID: 17, pciDeviceID: 0, pciDomainID:0
Device: 1, GeForce RTX 2080 Ti, pciBusID: 65, pciDeviceID: 0, pciDomainID:0
Device=0 CANNOT Access Peer Device=1
Device=1 CANNOT Access Peer Device=0

NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.

P2P Connectivity Matrix
     D\D     0     1
     0       1     0
     1       0     1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 527.87   4.03
     1   3.97 530.74
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
   D\D     0      1
     0 531.26   4.02
     1   3.98 532.01
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 530.90   7.65
     1   7.64 536.53
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D     0      1
     0 530.74   7.65
     1   7.66 535.47
P2P=Disabled Latency Matrix (us)
   GPU     0      1
     0   1.27  15.05
     1  11.38   1.91

   CPU     0      1
     0   3.03   6.88
     1   6.82   3.06
P2P=Enabled Latency (P2P Writes) Matrix (us)
   GPU     0      1
     0   1.28  13.89
     1  11.30   1.91

   CPU     0      1
     0   3.21   6.80
     1   6.89   2.74

From CUDA 10.0 - simpleP2P:

[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2
> GPU0 = "GeForce RTX 2080 Ti" IS  capable of Peer-to-Peer (P2P)
> GPU1 = "GeForce RTX 2080 Ti" IS  capable of Peer-to-Peer (P2P)

Checking GPU(s) for support of peer to peer memory access...
> Peer access from GeForce RTX 2080 Ti (GPU0) -> GeForce RTX 2080 Ti (GPU1) : No
> Peer access from GeForce RTX 2080 Ti (GPU1) -> GeForce RTX 2080 Ti (GPU0) : No
Two or more GPUs with SM 2.0 or higher capability are required for ./simpleP2P.
Peer to Peer access is not available amongst GPUs in the system, waiving test.

Maybe I have hit a very specific case that happens to work with peer-to-peer.
Until the officially supported setup is ready, maybe I could use this one for OptiX, but I don’t know whether OptiX supports this “compatible setup”. Do you have any suggestions I could try?

Thanks.

As I said, the SLI enable feature in the Windows 10 NVIDIA Control Panel most likely won’t work with an asymmetrical PCI-E layout. We’ve seen that here at least twice. I’m only using Quadro boards and always have them in x16 lane slots; anything else doesn’t make sense for workstation performance-related work. I have no idea what happens in TCC mode or under Linux.

There is simply zero guarantee that your experiment will work reliably.