Does CUDA 5.5 MPS support concurrent execution of kernels from different processes?

What does "multi-process" really mean at the level of kernel concurrency? Does it mean that, as before, those kernels are serialized? Are they time-sliced? Or is it possible that, at a given point in time, the GPU simultaneously runs thread blocks from those kernels?

Theoretically yes, but only on devices of compute capability 3.5 (e.g. GeForce GTX Titan and Tesla K20), which support Hyper-Q.

Is there a document that explains the details?

See the Hyper-Q section of the CUDA C Programming Guide; a search such as "hyperQ cuda programming guide" turns it up:
https://www.google.de/search?client=ubuntu&channel=fs&q=hyperQ+cuda+programming+guide&ie=utf-8&oe=utf-8&gws_rd=cr&ei=6JpuUpmYLczbsgarmYHgCg

Also check the simpleHyperQ example from the CUDA SDK: http://docs.nvidia.com/cuda/samples/6_Advanced/simpleHyperQ/doc/HyperQ.pdf
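For illustration, here is a minimal sketch in the spirit of the simpleHyperQ sample (the kernel name, stream count, and spin duration are my own choices, not from the sample). It launches one small kernel into each of several streams; on a compute capability 3.5 device, Hyper-Q lets these kernels execute concurrently, whereas older hardware with a single work queue largely serializes them. Under MPS, the same kind of overlap can happen for kernels coming from different processes, since MPS funnels their work through a single GPU context.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel that busy-waits for a given number of device clock ticks,
// so the overlap (or lack of it) is visible in a profiler timeline.
__global__ void spin_kernel(clock_t cycles)
{
    clock_t start = clock();
    while (clock() - start < cycles) { }
}

int main()
{
    const int num_streams = 8;   // Hyper-Q exposes up to 32 hardware work queues
    cudaStream_t streams[num_streams];

    for (int i = 0; i < num_streams; ++i)
        cudaStreamCreate(&streams[i]);

    // Roughly 10 ms worth of ticks: clockRate is in kHz (ticks per millisecond).
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    clock_t cycles = (clock_t)prop.clockRate * 10;

    // One tiny kernel per stream. With Hyper-Q (compute capability >= 3.5)
    // these can run simultaneously; without it, most launches serialize.
    for (int i = 0; i < num_streams; ++i)
        spin_kernel<<<1, 1, 0, streams[i]>>>(cycles);

    cudaDeviceSynchronize();

    for (int i = 0; i < num_streams; ++i)
        cudaStreamDestroy(streams[i]);

    return 0;
}
```

Running this under nvprof or the Visual Profiler on a Hyper-Q device should show the kernels overlapping in time rather than executing back to back.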