Can I use concurrent kernel execution among processes with the same context?

I want to share a Kepler GPU among several processes, so that kernels from different processes can execute concurrently on the GPU. Can I use the CUDA driver API to create a common CUDA context and then share it with all the processes? Besides the context, are there any other constraints that would prevent multiple processes from executing kernels concurrently on the GPU?
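To make the question concrete, here is a minimal sketch of what I mean by creating a context with the driver API (error checking omitted; whether and how `ctx` could actually be made usable by other processes is exactly what I am asking):

```cuda
#include <cuda.h>
#include <stdio.h>

int main(void) {
    CUdevice dev;
    CUcontext ctx;

    cuInit(0);                 // initialize the driver API
    cuDeviceGet(&dev, 0);      // device 0: the Kepler GPU
    cuCtxCreate(&ctx, 0, dev); // create a context on that device

    // Open question: can this context be shared with other processes
    // (e.g. after fork(), or via some handle-passing mechanism) so that
    // kernels launched from those processes run concurrently on the GPU?

    cuCtxDestroy(ctx);
    return 0;
}
```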