We have a four-GPU workstation: two RTX 2080 Ti's and two Quadro RTX 6000's. We have two brand-new NVLink bridges, one to bridge the two 2080 Ti's together and the other to bridge the two Quadros together. Individually, each of the four GPUs appears to be working fine. However, since installing the NVLink bridges, none of the links have activated. To confirm this we run “nvidia-smi nvlink -s” in the terminal, which reports every NVLink link as inactive. The workstation is dual boot (Ubuntu 18.04 and Windows 10 Pro for Workstations); we mostly use Ubuntu, but the NVLink bridges do not seem to work in either OS. The NVIDIA driver installed on both OSes is version 440.
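For reference, the checks we are using look roughly like this (“nvidia-smi nvlink -s” is the command mentioned above; “nvidia-smi nvlink -c” and “nvidia-smi topo -m” are additional queries that, as far as we understand, report per-link capabilities and the GPU interconnect topology):

# report the state of every NVLink link on every GPU
nvidia-smi nvlink -s

# report per-link capabilities (P2P, SLI, system-memory access, ...)
nvidia-smi nvlink -c

# show the GPU-to-GPU interconnect/topology matrix
nvidia-smi topo -m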
Is there anything that needs to be set in order to activate the NVLink bridges in Ubuntu and in Windows 10? I thought the two bridges would be detected automatically and the links would then activate on their own, but this does not seem to be the case.
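For completeness, a peer-to-peer test with the stock CUDA toolkit samples might also be relevant here (the path below is only an example of where a default samples build would place the binaries; we are assuming these two samples exercise the P2P path over NVLink):

# example path for a default CUDA samples build
cd ~/NVIDIA_CUDA-10.2_Samples/bin/x86_64/linux/release
./simpleP2P
./p2pBandwidthLatencyTest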
We also tried “nvidia-smi -i XX -dm TCC” to switch the Quadro GPUs into the TCC (Tesla Compute Cluster) driver model, but the command is reported as not valid for these GPUs.
Is it possible that both NVLink bridges are defective? Is there anything else that you would suggest?
The motherboard is an ASUS WS 621-64L, which fully supports four GPUs and SLI/NVLink.
Thank you in advance for your advice.