Can I use MVAPICH with CUDA Fortran?

Hello,
I usually use OpenMPI from NVHPC.
I use Fortran, CUDA, OpenACC, MPI communication, and CUDA-aware MPI.

NVHPC bundles OpenMPI, which makes it very convenient to use.
However, I want to switch to a different MPI library.

I downloaded MVAPICH2-GDR and tested it:
https://mvapich.cse.ohio-state.edu/userguide/gdr/
I was able to use MVAPICH successfully for CPU-side MPI.
However, when I tried GPU MPI communication, it failed.
When compiling the source code, I use the mpifort from that download.

The compilation error message is below:

Use cudafor
Fatal Error: Can't open module file 'cudafor.mod' for reading at (1): No such file or directory
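A stripped-down reproducer (not my real source, just the pattern that triggers the error; the file name is made up) looks like this:

! cuda_mod_test.f90 -- builds with "nvfortran -cuda cuda_mod_test.f90"
program cuda_mod_test
  use cudafor          ! CUDA Fortran module shipped with nvfortran; gfortran has no cudafor.mod
  implicit none
  integer :: ndev, istat
  istat = cudaGetDeviceCount(ndev)
  print *, 'visible CUDA devices:', ndev
end program cuda_mod_test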

I set the paths manually:

mvapich_gdr_path="/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0"
cuda_1130_path="/apps/cuda/RHEL8/cuda_11.3.0"

$mvapich_gdr_path/bin/mpifort \
    -I$mvapich_gdr_path/include \
    -I$cuda_1130_path/include \
    -L$mvapich_gdr_path/lib64 \
    -L$cuda_1130_path/lib64 \
    -lmpi -lcudart \
    cpu_test.f90 -o build_mvapich.out

I think I should use nvfortran when compiling the Fortran code.
But in NVHPC, mpifort is nvfortran + OpenMPI,
while the mpifort I downloaded is gfortran + MVAPICH.
I want to use nvfortran + MVAPICH.
Is that possible?

It should be. Granted, it's been a while since I've built MVAPICH myself, but you should be able to set "FC=nvfortran F77=nvfortran" when running configure to have it build with nvfortran. Though I can ask the folks here who do our MPI builds if there's anything else.

Scanning their User Guide, I did notice that they haven’t updated the docs to reflect our re-branding from PGI (pgfortran) to NVHPC (nvfortran), but any instructions for PGI would still be relevant.
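As a rough, untested sketch of that configure step for the plain MVAPICH2 tarball (the install prefix below is a placeholder, and you should double-check the CUDA-related flags against their User Guide):

# Build MVAPICH2 from source with the NVHPC compilers
cd mvapich2-2.3.7
./configure CC=nvc CXX=nvc++ FC=nvfortran F77=nvfortran \
    --enable-cuda --with-cuda=/apps/cuda/RHEL8/cuda_11.3.0 \
    --prefix=$HOME/mvapich2-2.3.7-nvhpc
make -j 8
make install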

Currently, after allocating GPU memory with OpenACC, I use it in MPI communication directly, without copying it back to the host (CPU).
I would like to change from OpenMPI to MVAPICH.
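To be concrete, the communication pattern looks roughly like this (a simplified sketch with an arbitrary array size, not my actual application code):

! acc_mpi_sketch.f90 -- device buffer allocated via OpenACC, sent with CUDA-aware MPI
program acc_mpi_sketch
  use mpi
  implicit none
  integer, parameter :: n = 1024
  real(8), allocatable :: a(:)
  integer :: rank, nprocs, ierr, stat(MPI_STATUS_SIZE)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(a(n))
  a = real(rank, 8)
  !$acc enter data copyin(a)

  ! host_data exposes the device address to MPI, so a CUDA-aware MPI
  ! library can move the GPU buffer without staging through the host.
  !$acc host_data use_device(a)
  if (rank == 0) then
     call MPI_Send(a, n, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_Recv(a, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, stat, ierr)
  end if
  !$acc end host_data

  !$acc exit data copyout(a)
  if (rank == 1) print *, 'received a(1) =', a(1)

  deallocate(a)
  call MPI_Finalize(ierr)
end program acc_mpi_sketch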

I opened the "mpifort" wrapper script with vi and edited it.

Original MVAPICH-GDR mpifort:

# Set the default values of all variables.
#
# Directory locations: Fixed for any MPI implementation.
# Set from the directory arguments to configure (e.g., --prefix=/usr/local)
prefix=/opt/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0
exec_prefix=/opt/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0
sysconfdir=/opt/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/etc
includedir=/opt/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/include
libdir=/opt/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/lib64
modincdir=/opt/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/include

# Default settings for compiler, flags, and libraries
# Determined by a combination of environment variables and tests within
# configure (e.g., determining whether -lsocket is needed)
FC="gfortran"
FCCPP=""

Modified mpifort, switched to nvfortran and with the paths updated:

# Directory locations: Fixed for any MPI implementation.
# Set from the directory arguments to configure (e.g., --prefix=/usr/local)
prefix=/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0
exec_prefix=/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0
sysconfdir=/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/etc
includedir=/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/include
libdir=/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/lib64
modincdir=/home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/include

# Default settings for compiler, flags, and libraries
# Determined by a combination of environment variables and tests within
# configure (e.g., determining whether -lsocket is needed)
FC="nvfortran"
FCCPP=""
#
# Fortran 90 Compiler characteristics
FCINC="-I"
# f90modinc specifies how to add a directory to the search path for modules.
# Some compilers (Intel ifc version 5) do not support this concept, and
# instead need

The following error occurs:

[jsera.lee@agpu1321 my_test]$ ./my_mpifort mpi_test.f90
NVFORTRAN-F-0004-Corrupt or Old Module file /home/sr7/jsera.lee/mpi_test/mvapich_gdr/mvapich2/gdr/2.3.7/no-mpittool/no-openacc/cuda11.3/mofed5.6/mpirun/gnu8.5.0/include/mpi.mod (cpu_test.f90: 6)
NVFORTRAN/x86-64 Linux 22.3-0: compilation aborted

Is rebuilding MVAPICH with FC=nvfortran (the compiler included in NVHPC) the only way to make this work?

The MVAPICH I downloaded is the version with the GDR feature added. If you install it with rpm, the bin, lib64, and include directories of mvapich2_gdr are available right away, without building.
https://mvapich.cse.ohio-state.edu/downloads/#mv2gdr-23

Only an RPM is provided for MVAPICH-GDR; regular MVAPICH appears to be the only variant distributed as a tarball that can be built from source.
https://mvapich.cse.ohio-state.edu/downloads/#mv2.3.7

Yes, if you want to use the MPI module, since it is compiled with gfortran and Fortran module files are not interoperable between compilers.

Though, you can try including the F77-style "mpif.h" header file rather than using the "mpi" module to see if that works around the issue.
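Something along these lines (an untested sketch just to illustrate the header-based approach):

! hello_mpif.f90 -- uses the F77-style header instead of the "mpi" module,
! so nvfortran never needs to read the gfortran-built mpi.mod
program hello_mpif
  implicit none
  include 'mpif.h'
  integer :: rank, nprocs, ierr
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'Hello from rank', rank, 'of', nprocs
  call MPI_Finalize(ierr)
end program hello_mpif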

Thank you for your kind answer.
I will try to build MVAPICH using nvfortran.
