MPI_Send and MPI_Recv hang when count > 64 for MPI_FLOAT

Hi all,

I am encountering a problem when using MPI_Send and MPI_Recv. When the count is <= 64, the program runs without any problem, but for count > 64 it hangs.

Is there any solution to this? The buffer is allocated in global (device) memory on each of the two GPUs.

Thanks!

Here is the code I use. When I set n <= 64 it works; otherwise it hangs.

#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    float *d_msg;
    int myrank, tag = 99;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    const int n = 65;          /* hangs; works when n <= 64 */
    const int num_GPUs = 2;    /* run with 2 ranks, one GPU per rank */

    /* buffer lives in device (global) memory; passed directly to MPI,
       which relies on a CUDA-aware MPI build */
    cudaMalloc((void**)&d_msg, n * sizeof(float));

    /* both ranks call MPI_Send first, then MPI_Recv */
    MPI_Send(d_msg, n, MPI_FLOAT, (myrank + 1) % num_GPUs, tag, MPI_COMM_WORLD);
    MPI_Recv(d_msg, n, MPI_FLOAT, (myrank - 1 + num_GPUs) % num_GPUs, tag, MPI_COMM_WORLD, &status);

    cudaFree(d_msg);
    MPI_Finalize();
    return 0;
}
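
For reference, one workaround I would try, assuming the hang is the usual deadlock where both ranks block in MPI_Send once the message is too large for the eager path, is MPI_Sendrecv, which posts the send and the receive together so neither rank waits for the other to reach MPI_Recv. This is only a sketch (separate send/receive buffers, still assuming a CUDA-aware MPI build), not a confirmed fix:

#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    float *d_send, *d_recv;
    int myrank, nranks, tag = 99;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int n = 65;   /* the count that previously hung */

    /* separate device buffers for send and receive */
    cudaMalloc((void**)&d_send, n * sizeof(float));
    cudaMalloc((void**)&d_recv, n * sizeof(float));

    /* MPI_Sendrecv pairs the send and receive in one call, so the
       exchange cannot deadlock on the send-before-receive ordering */
    MPI_Sendrecv(d_send, n, MPI_FLOAT, (myrank + 1) % nranks, tag,
                 d_recv, n, MPI_FLOAT, (myrank - 1 + nranks) % nranks, tag,
                 MPI_COMM_WORLD, &status);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}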
