Is there any solution for combining MPI and OpenCL?

I know there are solutions for combining MPI and CUDA, such as MPICH2 (am I right?).

But I want to ask a question, is there any solution for combining MPI and OpenCL?

And is there any solution for combining MPI, OpenMP, and CUDA or OpenCL?

Of course there is a solution; see this link for an example of how to combine MPI with CUDA. To add OpenMP, just add the -fopenmp flag to the compilation commands:

nvcc -Xcompiler -fopenmp -c kernel.cu  // if you want to use OpenMP in the *.cu file

mpicc -fopenmp -o mpicuda main.c kernel.o -lcudart -L /usr/local/cuda/lib64 -I /usr/local/cuda/include  // if you want to use OpenMP in the MPI file
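To make the two-step build above concrete, here is a minimal sketch of what the two files might look like. All names (`scale`, `run_scale`, the array size) are hypothetical, not taken from the linked example; the point is just that `nvcc` compiles the CUDA part, `mpicc` compiles the MPI + OpenMP part, and an `extern "C"` wrapper ties them together at link time.

```cuda
// ---- kernel.cu (compiled with: nvcc -Xcompiler -fopenmp -c kernel.cu) ----
__global__ void scale(float *v, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

// extern "C" so the C code compiled by mpicc can link against this symbol
extern "C" void run_scale(float *host_v, float a, int n) {
    float *dev_v;
    cudaMalloc(&dev_v, n * sizeof(float));
    cudaMemcpy(dev_v, host_v, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev_v, a, n);
    cudaMemcpy(host_v, dev_v, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_v);
}

// ---- main.c (compiled with: mpicc -fopenmp -o mpicuda main.c kernel.o -lcudart ...) ----
// #include <mpi.h>
// #include <omp.h>
//
// void run_scale(float *v, float a, int n);  /* provided by kernel.o */
//
// int main(int argc, char **argv) {
//     MPI_Init(&argc, &argv);
//     int rank;
//     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
//
//     float v[1024];
//     #pragma omp parallel for          /* OpenMP on the host side */
//     for (int i = 0; i < 1024; i++)
//         v[i] = (float)(rank + i);
//
//     run_scale(v, 2.0f, 1024);          /* CUDA on the device side */
//
//     MPI_Finalize();
//     return 0;
// }
```

Run it with something like `mpirun -np 2 ./mpicuda`; each MPI rank fills its array with OpenMP threads and then hands it to the GPU.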

Have fun ;)

Are the MPI libraries CUDA-aware now?

Sorry, but I don't understand your question :unsure:

If I wanted two GPUs on two different nodes to communicate, I would first have to copy the data from the GPU to the host, then call MPI to transfer the data between the nodes, and then copy the data from the second host to the second GPU. A CUDA-aware MPI library would detect that the data is on the GPU and make the copies to and from the host automatically. This would simplify the programming a little.
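The difference can be sketched in a few lines; this is a hedged illustration (buffer names are made up), not code from any particular MPI implementation:

```cuda
// Sending n floats from a device buffer on rank 0 to rank 1.

// Without CUDA-aware MPI: stage the transfer through host memory by hand.
float *host_buf = (float *)malloc(n * sizeof(float));
cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
MPI_Send(host_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
// (the receiver does MPI_Recv into a host buffer, then cudaMemcpy to its GPU)

// With a CUDA-aware MPI library: pass the device pointer directly and let
// the library handle (or bypass) the staging, e.g. via GPUDirect.
MPI_Send(dev_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
```

Both calls use the standard `MPI_Send` signature; what changes is whether the library accepts a device pointer as the buffer argument.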

MVAPICH2 is, for sure, since it implements specific optimisations based on GPUDirect, as explained here. I'm not sure whether this is the case for the other usual free libraries, such as MPICH2 or Open MPI, or for the vendors' proprietary implementations.