OpenMPI and MVAPICH2

Dear users and developers,

We are running a 16-blade cluster with four 12-core AMD Opteron CPUs per node and an InfiniBand interconnect, using the CDK 11.10-0 release.
OpenMPI 1.4.4 and MVAPICH2-1.7 are installed. However, MPI debugging and profiling only work partially:
Debugging with MVAPICH2 works simply by invoking the debugger via:
pgdbg -mpi:mpiexec -np 4 ./a.out
With OpenMPI it does not, since its mpiexec seems to work differently. The PGI Tools manual suggests finding the environment variables associated with PGDBG_MPI_RANK_ENV and PGDBG_MPI_SIZE_ENV and setting them accordingly. For OpenMPI these are OMPI_COMM_WORLD_RANK and OMPI_COMM_WORLD_SIZE. In my opinion, the actual problem is passing their values to the PGDBG variables properly.
Does anyone know whether this will make the debugger work, or how it can be achieved?
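For reference, the variable names can be confirmed by running env under OpenMPI's mpiexec (a quick check only; the process count is arbitrary):
mpiexec -np 2 env | grep -E 'OMPI_COMM_WORLD_(RANK|SIZE)'
This should print OMPI_COMM_WORLD_RANK and OMPI_COMM_WORLD_SIZE for each launched process.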
The situation for the profiler is similar:
Profiling with MVAPICH2 works for C code if it is compiled with e.g. -Mprof=mpich2,lines, but it does not work for Fortran.
Profiling with OpenMPI works for C code after editing the compiler wrapper data files of the OpenMPI installation as described in the PGI Tools manual, but doing the same for the Fortran wrapper data files enables profiling for process 0 only.
Are there any ideas, or has anyone encountered the same issue?
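To illustrate the kind of build lines meant above (a sketch only; the source file names are placeholders):
pgcc -Mprof=mpich2,lines -o ctest ctest.c          # C: profile data is collected as expected
pgfortran -Mprof=mpich2,lines -o ftest ftest.f90   # Fortran: no usable profile data in our runs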

Thanks in advance

BWB

Dear BWB:

Sorry you are having trouble with the tools.

First, the PGI tools aren’t officially supported with MVAPICH2. Until such time as we support it, we can’t provide much help there.

With respect to OpenMPI, that is officially supported. We are looking into the behaviour that you are reporting and will reply again once we have determined what is going on.

–Don

Here is a workaround for the OpenMPI debugging issue:

To debug OpenMPI programs using PGDBG, the PGDBG-specific MPI environment
variables PGDBG_MPI_RANK_ENV and PGDBG_MPI_SIZE_ENV must be set to the
corresponding OpenMPI environment variables. A user can debug OpenMPI
programs using the following command:

pgdbg -mpi:<absolute_path_to_mpiexec> \
    -x PGDBG_MPI_RANK_ENV=OMPI_COMM_WORLD_RANK \
    -x PGDBG_MPI_SIZE_ENV=OMPI_COMM_WORLD_SIZE \
    <other_mpiexec_parameters> <executable_path>
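
For example, on an installation like the one described, the invocation could look like the following (the mpiexec path here is only a placeholder; substitute your own absolute path):

pgdbg -mpi:/opt/openmpi-1.4.4/bin/mpiexec \
    -x PGDBG_MPI_RANK_ENV=OMPI_COMM_WORLD_RANK \
    -x PGDBG_MPI_SIZE_ENV=OMPI_COMM_WORLD_SIZE \
    -np 4 ./a.out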

We are continuing to investigate the OpenMPI / Fortran profiling issue.

Forwarding the environment variables as mentioned works perfectly.