Can I achieve multi-GPU communication within one node through MPI?

Hi,
I have two questions about multi-GPU communication:
(1) I currently have two GPUs communicating on one machine (a multi-core CPU with 2 NVIDIA GPUs). Can I achieve GPUDirect communication between them through MPI? My code is based on CUDA-aware MPI.
(2) Can I use GPUDirect Storage on a single machine with an NVMe disk and an NVIDIA GPU? I don't want to go through an InfiniBand network card.
Thanks,
Li Jian
from China

  1. Yes. Using CUDA-aware MPI shouldn’t be any different in this setting. Pass the appropriate device pointers to your CUDA-aware MPI_Sendrecv; the MPI library will do the right thing, including using a GPUDirect P2P transfer if the platform supports it. (See the MPI sketch after this list.)
  2. Theoretically yes. I don’t know whether there are any productized GDS (GPUDirect Storage) drivers for a local NVMe disk. (See the cuFile sketch at the end.)
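
For (1), here is a minimal, untested sketch of what passing device pointers to a CUDA-aware MPI_Sendrecv looks like. The buffer size and the one-rank-per-GPU mapping are assumptions; your actual communication pattern will differ.

```c
/* Two ranks, one GPU each, exchanging device buffers via CUDA-aware MPI. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Assumption: rank 0 uses GPU 0 and rank 1 uses GPU 1. */
    cudaSetDevice(rank);

    const int N = 1 << 20;
    float *d_send, *d_recv;                      /* device pointers */
    cudaMalloc((void **)&d_send, N * sizeof(float));
    cudaMalloc((void **)&d_recv, N * sizeof(float));

    /* With a CUDA-aware MPI, device pointers can be passed directly;
     * the library may use GPUDirect P2P for the intra-node transfer. */
    int peer = 1 - rank;
    MPI_Sendrecv(d_send, N, MPI_FLOAT, peer, 0,
                 d_recv, N, MPI_FLOAT, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}
```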
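For (2), assuming a GDS-capable driver and filesystem are in place on your machine, the host-side usage goes through the cuFile API roughly as sketched below. The file path is a placeholder, and whether GDS actually engages for a local NVMe disk (rather than falling back to a bounce buffer) depends on your platform and driver support.

```c
/* Sketch: reading from an NVMe-backed file directly into GPU memory via cuFile. */
#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void)
{
    const size_t size = 1 << 20;

    /* Open the GDS driver. */
    CUfileError_t status = cuFileDriverOpen();
    if (status.err != CU_FILE_SUCCESS) return 1;

    /* GDS requires O_DIRECT file access; the path is a placeholder. */
    int fd = open("/path/to/datafile", O_RDONLY | O_DIRECT);
    if (fd < 0) return 1;

    /* Register the POSIX file descriptor with cuFile. */
    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    status = cuFileHandleRegister(&fh, &descr);
    if (status.err != CU_FILE_SUCCESS) return 1;

    /* Allocate and register a device buffer, then read into it. */
    void *d_buf;
    cudaMalloc(&d_buf, size);
    cuFileBufRegister(d_buf, size, 0);

    ssize_t n = cuFileRead(fh, d_buf, size, 0, 0);
    (void)n;

    cuFileBufDeregister(d_buf);
    cuFileHandleDeregister(fh);
    cudaFree(d_buf);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```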