Can two GPUs share one pinned memory region?

I use two Tesla K20c GPUs.
Until now I have created a separate buffer object for each GPU in order to use pinned memory, like this:

// create host buffers in order to avoid data transfers
cl::Buffer _in_matrix_hst_buffer = cl::Buffer(_context,
        CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR | CL_MEM_COPY_HOST_PTR,
        _M * _N * sizeof(cl_float),
        in_matrix.get(),
        &err);
check_error(err);

But now I would like to create one pinned memory region that both Tesla K20c GPUs can access, so that only one buffer exists in host memory. Since I found out that pinned memory is allocated using the mmap function with the MAP_FIXED flag, I tried to create my own pinned-memory-compatible buffer like this:

#include <fcntl.h>     // O_* flags
#include <sys/mman.h>  // shm_open, mmap
#include <sys/stat.h>  // mode constants
#include <unistd.h>    // ftruncate

size_t siz = 1024 * 512 * sizeof(float);
int fd = shm_open("region", O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
ftruncate(fd, siz);

// reserve an address range, then re-map it at that fixed address
void* pre = mmap(NULL, siz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, fd, 0);
float* Asm = (float*) mmap(pre, siz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED | MAP_ANONYMOUS, fd, 0);

After that, Asm is filled with data and a buffer is created like this:

// create host buffers in order to avoid data transfers
_in_matrix_hst_buffer = cl::Buffer(_context,
        CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
        _M * _N * sizeof(cl_float),
        in_matrix.get(),
        &err);
check_error(err);

I receive correct results with this method, but it does not use pinned memory.
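
To check whether transfers take the pinned path, one can time a host-to-device copy with event profiling and compare the measured bandwidth with what pinned transfers normally reach on a K20c. A rough sketch, assuming a command queue created with CL_QUEUE_PROFILING_ENABLE and a hypothetical device buffer _in_matrix_dev_buffer:

// time one host->device transfer of the matrix
cl::Event evt;
err = queue.enqueueWriteBuffer(_in_matrix_dev_buffer, CL_FALSE, 0,
        _M * _N * sizeof(cl_float), Asm, NULL, &evt);
check_error(err);
evt.wait();

cl_ulong start = evt.getProfilingInfo<CL_PROFILING_COMMAND_START>();
cl_ulong end   = evt.getProfilingInfo<CL_PROFILING_COMMAND_END>();
// profiling timestamps are in nanoseconds, so bytes / ns == GB/s
double gbs = (_M * _N * sizeof(cl_float)) / double(end - start);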

Therefore I would like to know if it is possible to use one pinned memory region for two GPUs.

Thanks in advance!

Best regards,

Fabian

In CUDA you would need to pass the cudaHostRegisterPortable flag to cudaHostRegister() in order to be able to use the pinned memory from all devices.
Unfortunately I have no idea how that translates to OpenCL.
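
For reference, a minimal sketch of the CUDA side (buffer name and size are made up for illustration):

#include <cuda_runtime.h>
#include <cstdlib>

int main() {
    size_t siz = 1024 * 512 * sizeof(float);
    float* host_buf = (float*) malloc(siz);

    // Pin the existing host allocation; cudaHostRegisterPortable makes it
    // count as pinned for every device in the system, not just the one
    // that is current at registration time.
    cudaHostRegister(host_buf, siz, cudaHostRegisterPortable);

    // ... asynchronous copies from host_buf now use the pinned path on any GPU ...

    cudaHostUnregister(host_buf);
    free(host_buf);
    return 0;
}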