pgc++ plus MPI and issues with restrictive ptrace settings

Hello,
I am trying to set up the PGI compiler, Community Edition 19.10.

The compiler works normally, but my first test with mpic++ (which I installed from the same PGI package) produces an unexpected warning message:

 eric@hostname  ~/Projects/MPI_tests  mpic++ main.cpp 
 eric@hostname  ~/Projects/MPI_tests  mpirun -n 2 a.out 
--------------------------------------------------------------------------
WARNING: Linux kernel CMA support was requested via the
btl_vader_single_copy_mechanism MCA variable, but CMA support is
not available due to restrictive ptrace settings.

The vader shared memory BTL will fall back on another single-copy
mechanism if one is available. This may result in lower performance.

  Local host: hostname
--------------------------------------------------------------------------
Rank 0 OK!
Rank 1 OK!
[hostname:60252] 1 more process has sent help message help-btl-vader.txt / cma-permission-denied
[hostname:60252] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

which was produced by the code

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int value = 17;
        int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        if (result == MPI_SUCCESS)
            std::cout << "Rank 0 OK!" << std::endl;
    } else if (rank == 1) {
        int value;
        int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                              MPI_STATUS_IGNORE);
        if (result == MPI_SUCCESS && value == 17)
            std::cout << "Rank 1 OK!" << std::endl;
    }
    MPI_Finalize();
    return 0;
}

This does not look normal. What should I try in order to resolve this issue?

I don't know much about this warning, but I don't think it affects correctness; it may lower performance a bit in some situations. It is telling you that, because of a restrictive ptrace policy on your system, the vader shared-memory BTL cannot use CMA (Cross Memory Attach, a single-copy transfer mechanism in which one process reads another's memory directly) and has to fall back to copying the data in and out of a shared-memory segment, which can add latency for larger messages. If your ranks aren't communicating through shared memory, it probably has no effect at all.
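If you want to confirm that the Yama ptrace policy is the cause, the setting is exposed as a sysctl. A sketch, assuming a Linux kernel with the Yama security module enabled (the paths are the standard ones, but they may not exist on every distribution):

```shell
# Show the current Yama ptrace policy (0 = permissive, 1 or higher = restricted).
cat /proc/sys/kernel/yama/ptrace_scope

# Relax it for the current boot only (requires root), which lets CMA
# attach to sibling processes:
sudo sysctl -w kernel.yama.ptrace_scope=0

# To make the change persistent, drop it into sysctl.d and reload:
echo 'kernel.yama.ptrace_scope = 0' | sudo tee /etc/sysctl.d/10-ptrace.conf
sudo sysctl --system
```

Note that relaxing ptrace_scope weakens a system-wide hardening measure (any of your processes can then ptrace any other process you own), so whether that trade-off is acceptable depends on the machine.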

You might try running with "mpirun --mca btl_vader_single_copy_mechanism none" to see if that silences the warning.
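If that works and you don't want to type the flag every time, Open MPI also reads MCA parameters from a per-user configuration file. A sketch (the file path is Open MPI's standard per-user config location):

```shell
# One-off: disable the vader single-copy mechanism for this run only.
mpirun --mca btl_vader_single_copy_mechanism none -n 2 a.out

# Persistent: put the same parameter in the per-user MCA config file,
# which mpirun consults on every launch.
mkdir -p ~/.openmpi
echo 'btl_vader_single_copy_mechanism = none' >> ~/.openmpi/mca-params.conf
```

This only suppresses the CMA attempt (and the warning); the shared-memory transport itself stays enabled and simply uses its copy-in/copy-out path.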

Some useful links that I’ve found on the issue:

https://groups.io/g/OpenHPC-users/topic/openmpi_and_shared_memory/16489081