Compiling Quantum ESPRESSO with GPU support

Hi, I’m trying to compile Quantum ESPRESSO with GPU support for ph.x (latest dev branch, so I can’t use the NGC singularity image) on my university cluster with P100s.
I installed the HPC SDK and followed the instructions here: Problems installing Quantum ESPRESSO with GPU acceleration. My issue is that the GPU-accelerated libraries from the HPC SDK aren’t being recognized:

./configure --with-cuda="/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda" --with-cuda-runtime=11.7 --with-cuda-cc=6.0 --enable-openmp --with-scalapack='intel' --with-cuda-mpi=yes --libdir="/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/math_libs"

However, the output of make is:

directory QEHeat/src : ok
directory ACFDT/src : not present in /central/scratch/(username)/q-e-develop/install
directory KCW/PP : ok
all dependencies updated successfully
checking build system type... x86_64-pc-linux-gnu
checking ARCH... x86_64
checking setting AR... ... ar
checking setting ARFLAGS... ... ruv
checking for gfortran... gfortran
checking whether the Fortran compiler works... yes
checking for Fortran compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU Fortran compiler... yes
checking whether gfortran accepts -g... yes
checking for Fortran flag to compile .f90 files... none
configure: WARNING: F90 value is set to be consistent with value of MPIF90
checking for mpif90... mpif90
checking whether we are using the GNU Fortran compiler... yes
checking whether mpif90 accepts -g... yes
checking version of mpif90... gfortran 4.8
setting F90... gfortran
setting MPIF90... mpif90
checking whether we are using the GNU C compiler... yes
checking whether icc accepts -g... yes
checking for icc option to accept ISO C89... none needed
setting CC... icc
setting CFLAGS... -O3
using F90... gfortran
setting FFLAGS... -O3 -g -fopenmp
setting F90FLAGS... $(FFLAGS) -cpp -fopenmp
setting FFLAGS_NOOPT... -O0 -g
setting CPP... cpp
setting CPPFLAGS... -P -traditional -Uvector
setting LD... mpif90
setting LDFLAGS... -g -pthread -fopenmp
checking whether Fortran compiler accepts -Mcuda=cuda11.7... no
configure: error: You do not have the cudafor module. Are you using NVHPC compiler?

I have modified my PATH and MANPATH and LD_LIBRARY_PATH as in the instructions in the link. Previously, I somehow got it to use nvfortran but it didn’t recognize cublas, cufftw, etc.
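
For reference, my environment changes are roughly the standard HPC SDK exports from that post (a sketch; NVHPC is just shorthand for my install prefix):

# HPC SDK compiler environment
export NVHPC=/central/scratch/(username)/nvidia/hpc_sdk
export PATH=$NVHPC/Linux_x86_64/22.7/compilers/bin:$PATH
export MANPATH=$MANPATH:$NVHPC/Linux_x86_64/22.7/compilers/man
export LD_LIBRARY_PATH=$NVHPC/Linux_x86_64/22.7/compilers/lib:$LD_LIBRARY_PATH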

Any help or advice would be amazing. Cheers!

I’ve not built Quantum ESPRESSO myself, but I know some folks who have, so I can ask them if we need more detailed help.

Here, it looks like you’re using gfortran and icc. Try setting the environment variables “F90=nvfortran”, “CC=nvc”, and “CXX=nvc++” before running configure so you’re using the NVHPC compilers. Also make sure the mpif90 in your PATH comes from the bin directory of an MPI that’s configured for use with nvfortran. We ship pre-built OpenMPI and HPC-X with the HPC SDK if you want to use one of those. They’ll be in the sub-directories under “/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/comm_libs”.
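
For example, something along these lines before re-running your configure command (just a sketch; the OpenMPI version directory under comm_libs may differ on your install):

# use the NVHPC compilers instead of gfortran/icc
export F90=nvfortran
export CC=nvc
export CXX=nvc++
# put the bundled OpenMPI first in PATH so configure picks up its mpif90
export PATH=/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/comm_libs/openmpi4/openmpi-4.0.5/bin:$PATH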

Previously, I somehow got it to use nvfortran but it didn’t recognize cublas, cufftw, etc.

I’m not positive, but it may just be that you need to add the CUDA version you’re using to the end of the path in the “--libdir” setting. For example, ‘--libdir=“/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/math_libs/11.0”’.
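
With that change, the configure line from above might look something like this (a sketch; first check which CUDA version sub-directories actually exist under math_libs and pick the one matching your --with-cuda-runtime):

# list the CUDA version sub-directories shipped with this SDK install
ls /central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/math_libs
./configure --with-cuda="/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda" --with-cuda-runtime=11.7 --with-cuda-cc=6.0 --enable-openmp --with-scalapack='intel' --with-cuda-mpi=yes --libdir="/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/math_libs/11.7"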

-Mat

Hi Mat!
Setting the environment variables worked perfectly. However, I’m unable to test the build. If I run pw.x without mpirun, it just runs on 1 CPU core, as expected. I tried using one of the mpiruns located in the HPC SDK (there are many), but it complained about not finding a .so file. Which mpirun should I be using?

Cheers!

You’d use the mpirun from the same MPI install whose bin directory you put in your PATH when you built.
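
A quick way to check which ones are being picked up:

# both should resolve to the bin directory of the MPI under comm_libs that you built with
which mpif90 mpirun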

but it complained about not finding a .so file.

Most likely you just need to set the LD_LIBRARY_PATH environment variable to the “lib” directory for the MPI install.

For example if your PATH includes “/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/comm_libs/openmpi4/openmpi-4.0.5/bin”, then you’d set LD_LIBRARY_PATH to include “/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/comm_libs/openmpi4/openmpi-4.0.5/lib”. The loader will then be able to find the MPI runtime libraries.
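
Putting that together, something like this (a sketch; MPIHOME is just shorthand, the rank count and input file name are placeholders, and the OpenMPI version directory should match what’s actually under comm_libs):

MPIHOME=/central/scratch/(username)/nvidia/hpc_sdk/Linux_x86_64/22.7/comm_libs/openmpi4/openmpi-4.0.5
export PATH=$MPIHOME/bin:$PATH
export LD_LIBRARY_PATH=$MPIHOME/lib:$LD_LIBRARY_PATH
# one MPI rank per GPU is a common starting point for the GPU build
mpirun -np 4 pw.x -in scf.in > scf.out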

Hope this helps,
Mat