mpif90 and nvfortran compatibility issues

Hello. I am very new to CUDA, especially with Fortran. I am also a novice with makefiles, generally borrowing from others. Hopefully I’m not making an embarrassingly simple mistake!

I am having an issue compiling a project that uses both CUDA and MPI. I am using the nvfortran compiler for the CUDA code and mpif90 for the MPI code. Basically, from my main program I want to “use cudamod” and “use mpimod”. However, when compiling my main .f90 file with mpif90 I get the error:

mpif90 -O0 -Mbounds -traceback -cuda -c main.f90
ifort: command line warning #10006: ignoring unknown option ‘-Mbounds’
ifort: command line warning #10006: ignoring unknown option ‘-cuda’
main.f90(4): error #7013: This module file was not generated by any release of this compiler. [CUDAMOD]
use cudamod

I have seen the following threads which are similar:

However, the first two fail at the linking stage, while I’m still at the compiling stage, and either way my -Mbounds and -cuda flags are being ignored anyway. I’m not sure whether the last one is the same issue, since based on the table at
it seems like mpif90 and nvfortran should be compatible. Note I am not at UCAR but on a different computing cluster, which may be why I do not see the compatibility described there?
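One quick way to check which backend compiler a given mpif90 wrapper actually drives (the wrapper name is the same whether it was built against ifort or nvfortran) is to ask the wrapper itself. A small sketch, noting that the exact flag depends on the MPI implementation:

```shell
# Open MPI wrappers report their underlying compiler with --showme;
# MPICH- and Intel-MPI-style wrappers use -show. Try one, fall back to the other.
mpif90 --showme 2>/dev/null || mpif90 -show
```

If the output names ifort rather than nvfortran, the wrapper's MPI library (and its mpi.mod) was built with Intel's compiler and cannot consume nvfortran-generated module files.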

I am trying this with a very simple code that essentially just initializes MPI, then initializes a GPU, then prints a bunch of hello worlds before finalizing MPI. The files are:


program mpicuda

use mpimod
use cudamod

implicit none

integer :: i, rank

call mpiinit(rank)
call usegpu(rank)

do i = 1,10
   print*,"Hello world ",i," from rank ",rank
end do

call finalizempi()

end program


module mpimod

  use MPI

  implicit none

contains

      subroutine mpiinit(myrank)
         implicit none

         integer, intent(out) :: myrank
         integer :: rank, ierr, num_procs

         call MPI_INIT(ierr)
         call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
         call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
         myrank = rank

       end subroutine mpiinit

       subroutine finalizempi()

          implicit none
          integer :: ierr

          call MPI_FINALIZE(ierr)


       end subroutine finalizempi

end module mpimod


module cudamod

use cudafor

implicit none

contains

   subroutine usegpu(rank)

      implicit none

      integer, intent(in) :: rank
      integer :: istat, nblock, ngrid

      ngrid = 32
      nblock = 32
      if (rank .eq. 0) then
         istat = cudaSetDevice(0)
      end if

      call gpukern<<<ngrid, nblock>>>()


    end subroutine usegpu

    attributes(global) subroutine gpukern()
       implicit none
       integer :: i

        i = (blockidx%x-1)*blockdim%x + threadidx%x

    end subroutine gpukern

end module cudamod




Debug Flags

FCFLAGS:=-O0 -Mbounds -traceback

FILES:= cudamod.o mpimod.o main.o

MODS:=$(wildcard *.mod)

UNAME_S:=$(shell uname -n)
RM:=rm -fv

.SUFFIXES: .o .f .f90 .cuf

all: ${EXE}

${EXE}: ${FILES} ${MODS}
	${FCMPI} -cuda -o $@ ${FILES}

cudamod.o: cudamod.cuf
	${FCCUDA} ${FCFLAGS} -cuda -c cudamod.cuf

mpimod.o: mpimod.f90
	${FCMPI} ${FCFLAGS} -cuda -c mpimod.f90

main.o: main.f90
	${FCMPI} ${FCFLAGS} -cuda -c main.f90

%.mod: %.f90
	@echo "Some modules are out of date. Do clean and then recompile"
	${RM} $@ ${EXE}

.PHONY: clean

clean:
	${RM} *.o
	${RM} *.mod
	${RM} ${EXE}

It doesn’t seem like there should be an issue. I have also tried -fc=/bin/nvfortran in the makefile to get mpif90 to use the right compiler, but then I get a similar error output to the third thread I linked above (NVFORTRAN-F-0004-Corrupt or Old Module file). Is it impossible to get a file compiled with mpif90 to ‘use’ one compiled with nvfortran? What would be my alternative? Is my version of mpif90 out of date, so that it doesn’t recognize nvfortran? I have done something similar in C (mpicxx + nvcc) with no issues getting them to compile and link, but there I think the use of a header file avoids these compilation problems.

EDIT: Looks like some of the spacing in the makefile copied weird, but all the tabbing is fine in my file.

Looks like the mpif90 driver you’re using is configured for use with Intel’s ifort compiler, which doesn’t recognize these command-line options. You need an MPI build configured for use with nvfortran in order to use CUDA Fortran.

We ship Open MPI with the compilers, which you can find under the “comm_libs/mpi” directory of your compiler install. Or talk with the UCAR admins about which module you need to load.
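For a concrete starting point, a minimal sketch of switching to the bundled Open MPI and rebuilding everything with one toolchain might look like this. The install prefix and version below are assumptions; adjust them for your cluster (or use whatever environment module your admins provide):

```shell
# Assumed NVHPC install prefix and version -- adjust for your system.
NVHPC=/opt/nvidia/hpc_sdk/Linux_x86_64/23.7

# Put the bundled Open MPI (built against nvfortran) ahead of any other MPI.
export PATH=$NVHPC/comm_libs/mpi/bin:$PATH

# Confirm the wrapper now drives nvfortran, then rebuild every object
# with this one toolchain so all .mod files come from the same compiler.
mpif90 --showme
mpif90 -O0 -Mbounds -traceback -cuda -c cudamod.cuf
mpif90 -O0 -Mbounds -traceback -cuda -c mpimod.f90
mpif90 -O0 -Mbounds -traceback -cuda -c main.f90
mpif90 -cuda -o mpicuda cudamod.o mpimod.o main.o
```

The key point is that every .mod file consumed by a `use` statement must have been produced by the same compiler family that is reading it, so mixing ifort-built mpi.mod with nvfortran objects (or vice versa) will always fail.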