Sharing variables between CUDA programs: IPC on the video card without cudaMemcpy

This was migrated from another thread at the suggestion of a forum member.

Let’s say there is a program that deals with large amounts of memory. This hypothetical program would run on the card alongside a second program (something like an OS kernel) that takes care of swapping memory out. The idea would be to let the main program keep running while the secondary one swaps out memory/results/new input. Is there some way to do safe IPC between CUDA programs running on the GPU? I was thinking of running two processes on the CPU and having their CUDA programs share some state variable so they can communicate. I know we would need compute capability 1.1 hardware to implement locks, but from what I’ve read each CUDA program has its own memory space, so sharing variables between programs is not possible at this point. Is this correct? Can this be fixed?
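
To make the idea concrete, here is a minimal sketch of the device-side lock I have in mind, assuming only the atomicCAS/atomicExch intrinsics that compute capability 1.1 adds. The names (d_lock, critical_update) are made up for illustration, and of course this only helps if both programs can actually see the same d_lock, which is exactly the question:

    // Spinlock on a flag in global memory, built on the compute
    // capability 1.1 atomics. Hypothetical names throughout.
    __device__ int d_lock = 0;

    __global__ void critical_update(int *shared_state)
    {
        // A single thread takes the lock to keep the sketch simple.
        if (blockIdx.x == 0 && threadIdx.x == 0) {
            // Acquire: spin until we swap the flag from 0 to 1.
            while (atomicCAS(&d_lock, 0, 1) != 0)
                ;
            *shared_state += 1;        // critical section
            atomicExch(&d_lock, 0);    // release the lock
        }
    }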

-Jeff

This does not work. You can only run one kernel at a time on the GPU. If you interleave the two kernels, however, you don’t need synchronization between them (you may still need it between the threads within a kernel), since one kernel is guaranteed to finish before the next one starts.
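
As a rough sketch of what interleaving looks like from a single host process (the two kernel names here are made up): launches on the same stream serialize, so the second kernel sees whatever the first one wrote without any synchronization between them.

    #include <cuda_runtime.h>

    // Hypothetical stand-ins for the "main" and "swapping" programs.
    __global__ void compute_kernel(float *data, int n) { /* main work */ }
    __global__ void swap_kernel(float *data, int n)    { /* move data in/out */ }

    int main(void)
    {
        const int n = 1 << 20;
        float *d_data;
        cudaMalloc((void **)&d_data, n * sizeof(float));

        for (int step = 0; step < 10; ++step) {
            compute_kernel<<<64, 256>>>(d_data, n);
            // swap_kernel will not start until compute_kernel has
            // finished, so no synchronization between them is needed.
            swap_kernel<<<64, 256>>>(d_data, n);
        }

        cudaDeviceSynchronize();
        cudaFree(d_data);
        return 0;
    }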

Peter