Good reference/examples for CUDA fortran with MPI, please?

Hello everyone:

I would like to use both CUDA Fortran with MPI.

Could anyone suggest any good references or examples for this, please?

Thank you,


Hi Erin,

Glad to hear from you!

Sorry, I don’t have anything offhand, but CUDA Fortran and MPI are largely independent, so there’s nothing special to do. The only two things to consider are whether you want to use CUDA-aware MPI, and per-rank device assignment.

To use CUDA-aware MPI, simply pass the device pointers to the MPI calls. You’ll need an MPI build with CUDA-aware support, but both the OpenMPI and HPC-X we ship have this enabled by default.
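As a minimal sketch of what that looks like (the array size, tag, and program name here are illustrative, and it assumes a CUDA-aware MPI build):

```fortran
! Sketch only: assumes a CUDA-aware MPI and at least 2 ranks.
program cuda_aware_example
  use mpi
  use cudafor
  implicit none
  integer, parameter :: n = 1024
  real, device, allocatable :: a_d(:)   ! device-resident buffer
  integer :: myrank, ierr

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, myrank, ierr)
  allocate(a_d(n))
  a_d = real(myrank)

  ! Pass the device array directly to MPI; no host staging copy needed.
  if (myrank == 0) then
     call mpi_send(a_d, n, mpi_real, 1, 0, mpi_comm_world, ierr)
  else if (myrank == 1) then
     call mpi_recv(a_d, n, mpi_real, 0, 0, mpi_comm_world, &
                   mpi_status_ignore, ierr)
  end if

  deallocate(a_d)
  call mpi_finalize(ierr)
end program cuda_aware_example
```

Without CUDA-aware support you’d instead have to copy `a_d` to a host array before the send and back after the receive.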

There are a few ways to do rank-to-device assignment.

  1. Write a wrapper script which sets the environment variable CUDA_VISIBLE_DEVICES based on the local rank id.

  2. Add a call to cudaSetDevice
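For the first option, a minimal wrapper script might look like the following (the script name and GPU count are illustrative; OMPI_COMM_WORLD_LOCAL_RANK is what Open MPI exports, and other MPI implementations use different variable names):

```shell
#!/bin/sh
# Hypothetical wrapper.sh: give each local MPI rank its own GPU by
# round-robining the local rank over the GPUs on the node.
NGPUS=${NGPUS:-2}   # GPUs per node; in practice query nvidia-smi
LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
export CUDA_VISIBLE_DEVICES=$(( LOCAL_RANK % NGPUS ))
exec "$@"           # launch the real program with the restricted GPU view
```

You would then launch with something like `mpirun -np 4 ./wrapper.sh ./a.out`, so each rank sees only its assigned device as device 0.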

Here’s an example of the second approach. I take the local rank id mod the number of devices to round-robin the assignment. For example:

% cat test.CUF

program test
  use mpi
  use cudafor
  implicit none
  integer  :: nranks, myrank
  integer :: dev, devNum, local_rank, local_comm, ierr

  call mpi_init(ierr)
  call mpi_comm_size(mpi_comm_world,nranks,ierr)
  call mpi_comm_rank(mpi_comm_world,myrank,ierr)

  call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
       MPI_INFO_NULL, local_comm,ierr)
  call MPI_Comm_rank(local_comm, local_rank,ierr)
  ierr = cudaGetDeviceCount(devNum)
  dev = mod(local_rank,devNum)
  ierr = cudaSetDevice(dev)

  if (local_rank .eq. 0) then
     print *, "Number of devices: ", devNum
  endif
  print *, "Rank #",myrank," using device ", dev

  call MPI_finalize(ierr)

end program test
% mpif90 test.CUF
% mpirun -np 4 a.out
 Rank #            3  using device             1
 Rank #            2  using device             0
 Number of devices:             2
 Rank #            0  using device             0
 Rank #            1  using device             1

Hope this helps,


This is great!

Thank you so much!