Two way peer access between two GPU cards

Hello,

Our GPU sub-cluster is a set of 12 IBM iDataPlex dx360 M4 machines, each containing 2 GPU cards. Due to the physical constraints of these machines it is not possible to place both GPU cards on the same IOH subsystem. So, for example, when I run a ‘p2p’ check I see the following output…

CUDA_VISIBLE_DEVICES is unset.
CUDA-capable device count: 2
GPU0 " Tesla K20m"
GPU1 " Tesla K20m"

Two way peer access between:
GPU0 and GPU1: NO

Am I correct in thinking that if the cards are not installed on the same IOH subsystem then it is not possible to enable P2P access between the cards?

I recently installed an application called Amber and I noted that the performance of this package was terrible over two GPUs. Is there anything that I can do to improve multi-GPU performance on our nodes? Bear in mind that these nodes are now out of warranty, so we cannot justify any significant expenditure on them. Does anyone have any advice in this situation, please?

Best regards,
David

Peer-to-peer requires that the two GPUs are connected to the same PCIe root complex. These days, CPUs incorporate a PCIe root complex, so only GPUs connected to the same CPU socket can do peer-to-peer.
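
You can confirm this programmatically with the runtime API: cudaDeviceCanAccessPeer reports whether direct access is possible, and cudaDeviceEnablePeerAccess turns it on (per direction) where it is. A minimal sketch, assuming two or more visible devices (compile with nvcc; the file name is just an example):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int ok = 0;
            // Returns 1 only when both GPUs sit under the same PCIe root complex
            cudaDeviceCanAccessPeer(&ok, i, j);
            printf("GPU%d -> GPU%d peer access: %s\n", i, j, ok ? "YES" : "NO");
            if (ok) {
                // Capability is not enough: peer access must also be
                // enabled explicitly, and separately for each direction.
                cudaSetDevice(i);
                cudaDeviceEnablePeerAccess(j, 0);
            }
        }
    }
    return 0;
}
```

Note that even when the check reports NO, cudaMemcpyPeer / cudaMemcpyPeerAsync still work: the runtime falls back to staging the transfer through host memory. That fallback is what happens on your nodes, and it is why GPU-to-GPU bandwidth (and hence multi-GPU scaling) suffers.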

Questions about AMBER GPU acceleration are probably best directed to the AMBER mailing list: http://ambermd.org/#reflector
Multi-GPU scaling will depend on the kind and size of simulation you run; compare with the published AMBER benchmark results: http://ambermd.org/gpus/benchmarks.htm#Benchmarks
AMBER is under continuous development; are you using the latest AMBER version available?