Interop with CUDA: sharing device memory between OpenCL and CUDA

Hi everyone,

I’m working on a project where two teams have large sets of GPU code. Ours chose to use OpenCL and theirs chose to use CUDA. Is there any way to get a valid cl_mem from a CUDA context (or vice versa)? Failing that, is there any way to copy data from a CUDA device pointer to an OpenCL buffer?

The worst case scenario is that we copy to host memory and then back to device. I’d like to avoid this if at all possible.
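For reference, the worst-case fallback described above might look like the following sketch. It assumes `cuda_ptr` is a valid CUDA device pointer, and `cl_buf` / `queue` are a valid `cl_mem` and `cl_command_queue` in the OpenCL context; all names here are illustrative, and error checking is omitted:

```cpp
#include <cuda_runtime.h>
#include <CL/cl.h>
#include <vector>

// Worst-case path: CUDA device -> host staging buffer -> OpenCL device.
void copy_cuda_to_opencl(const void* cuda_ptr, cl_mem cl_buf,
                         cl_command_queue queue, size_t bytes)
{
    std::vector<unsigned char> staging(bytes);
    // Device -> host via the CUDA runtime.
    cudaMemcpy(staging.data(), cuda_ptr, bytes, cudaMemcpyDeviceToHost);
    // Host -> device via OpenCL (blocking write, so staging can be freed).
    clEnqueueWriteBuffer(queue, cl_buf, CL_TRUE, 0, bytes,
                         staging.data(), 0, nullptr, nullptr);
}
```

This costs two PCIe transfers per hand-off, which is exactly the overhead you’d want to avoid.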

Thanks in advance for any help!

–Mike

I don’t know of any supported way to do this, but you could try creating an OpenGL buffer object and then using OpenCL’s and CUDA’s OpenGL sharing capabilities to create a buffer handle for each compute API. You’ll have to do some mapping/unmapping (or acquire/release) each time you switch between CUDA and OpenCL, but that should still be cheaper than copying via host memory.

I’ve never tried it, so let me know if it works.
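A rough sketch of what I have in mind, untested. It assumes you already have a current OpenGL context, a CUDA context, and an OpenCL context created with GL sharing enabled (`CL_GL_CONTEXT_KHR` in the context properties); `cl_ctx`, `queue`, and `bytes` are placeholders, and error checks are omitted:

```cpp
#include <cuda_gl_interop.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>

void share_via_gl_buffer(cl_context cl_ctx, cl_command_queue queue, size_t bytes)
{
    // Create the shared GL buffer object.
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // CUDA side: register the GL buffer once, then map to get a device pointer.
    cudaGraphicsResource* cuda_res = nullptr;
    cudaGraphicsGLRegisterBuffer(&cuda_res, vbo, cudaGraphicsRegisterFlagsNone);
    cudaGraphicsMapResources(1, &cuda_res);
    void* cuda_ptr = nullptr;
    size_t mapped_bytes = 0;
    cudaGraphicsResourceGetMappedPointer(&cuda_ptr, &mapped_bytes, cuda_res);
    // ... run CUDA kernels on cuda_ptr ...
    cudaGraphicsUnmapResources(1, &cuda_res);  // hand the buffer back to GL

    // OpenCL side: wrap the same GL buffer as a cl_mem and acquire it.
    cl_int err = CL_SUCCESS;
    cl_mem cl_buf = clCreateFromGLBuffer(cl_ctx, CL_MEM_READ_WRITE, vbo, &err);
    glFinish();  // GL (and CUDA-via-GL) work must finish before OpenCL takes over
    clEnqueueAcquireGLObjects(queue, 1, &cl_buf, 0, nullptr, nullptr);
    // ... run OpenCL kernels on cl_buf ...
    clEnqueueReleaseGLObjects(queue, 1, &cl_buf, 0, nullptr, nullptr);
    clFinish(queue);
}
```

The acquire/release (OpenCL) and map/unmap (CUDA) calls are the hand-off points: each API must release the buffer before the other touches it, so the data itself never leaves the GPU.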