Compatibility of NVLink bridges

That won’t work at all!
It’s not possible to mix NVLink bridges from different GPU generations.
Each GPU architecture comes with its own matching bridge logic.

Normally Pascal bridges are silver, Volta bridges are gold, and Turing bridges are silver + green.
Quadro bridges come in 2-slot and 3-slot widths.
GeForce bridges come in 3-slot and 4-slot widths.
GeForce and Quadro bridges must also not be mixed.
The GV100 must have both of its NVLink bridges installed.

Current CUDA 10 drivers also allow peer-to-peer access under Windows 10 (WDDM 2), but only when SLI is enabled inside the NVIDIA Control Panel.
Experience shows that this only works when the NVLink boards are installed in identical PCI-E slots, i.e. both must be x16 electrical; x16 + x8 won’t work. They must also be connected to the same CPU.
CPUs with only 28 PCI-E lanes, for example, can’t fulfill that.
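One way to check the slot layout and CPU affinity is the driver’s own topology report. A minimal sketch (assumes `nvidia-smi` is on the PATH; it degrades gracefully when no NVIDIA driver is installed):

```python
# Sketch: print the GPU/PCI-E topology matrix via `nvidia-smi topo -m`.
# The matrix shows which CPU each GPU is attached to and whether a GPU
# pair is connected via NVLink (NVx) or only via PCI-E.
import shutil
import subprocess

def gpu_topology():
    """Return the `nvidia-smi topo -m` output, or None when no driver is available."""
    if shutil.which("nvidia-smi") is None:
        return None
    try:
        result = subprocess.run(
            ["nvidia-smi", "topo", "-m"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout
    except subprocess.CalledProcessError:
        return None

if __name__ == "__main__":
    topo = gpu_topology()
    print(topo if topo else "nvidia-smi not found; cannot inspect GPU topology.")
```

Per-link NVLink state (link speed, link up/down) can be inspected the same way with `nvidia-smi nvlink --status`.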

Under the Tesla Compute Cluster (TCC) driver mode, and likewise under Linux, peer-to-peer has always worked; the SLI state doesn’t matter there. GeForce boards do not support TCC mode.
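Whether the driver actually exposes peer-to-peer between two boards can be queried from the CUDA runtime with `cudaDeviceCanAccessPeer`. A sketch via ctypes (the library names tried are assumptions; the script simply reports that no CUDA runtime was found when none can be loaded):

```python
# Sketch: query the CUDA peer-to-peer capability matrix through the CUDA
# runtime library (libcudart), loaded via ctypes. Degrades gracefully on
# machines without a CUDA installation.
import ctypes

def find_cudart():
    """Try a few common CUDA runtime library names; return a handle or None."""
    for name in ("libcudart.so", "libcudart.so.12", "libcudart.so.11.0",
                 "cudart64_110.dll"):  # names are assumptions, not exhaustive
        try:
            return ctypes.CDLL(name)
        except OSError:
            pass
    return None

def p2p_matrix():
    """Return {(a, b): bool} for all ordered GPU pairs, or None without CUDA."""
    rt = find_cudart()
    if rt is None:
        return None
    count = ctypes.c_int(0)
    # cudaGetDeviceCount(int* count) returns 0 (cudaSuccess) on success
    if rt.cudaGetDeviceCount(ctypes.byref(count)) != 0:
        return None
    matrix = {}
    for a in range(count.value):
        for b in range(count.value):
            if a == b:
                continue
            can = ctypes.c_int(0)
            # cudaDeviceCanAccessPeer(int* canAccessPeer, int device, int peerDevice)
            rt.cudaDeviceCanAccessPeer(ctypes.byref(can), a, b)
            matrix[(a, b)] = bool(can.value)
    return matrix

if __name__ == "__main__":
    m = p2p_matrix()
    if m is None:
        print("No CUDA runtime found; cannot query peer-to-peer support.")
    else:
        for (a, b), ok in sorted(m.items()):
            print(f"GPU {a} -> GPU {b}: {'P2P supported' if ok else 'no P2P'}")
```

In a real application you would then call `cudaDeviceEnablePeerAccess` on each pair that reports support.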

Depending on the workstation motherboard, an NVLink setup requires bridge widths that match your PCI-E slot spacing, e.g. these for Quadro:
https://www.nvidia.com/en-us/design-visualization/nvlink-bridges/
Notice the different bandwidths! Even RTX5000 and RTX6000/8000 require different bridges.

The same applies to consumer motherboards, whose PCI-E slot layout usually differs and requires the wider bridges.