Recently I have become interested in parallelism on GPUs, but a few problems puzzle me:
(1) The CUDA documentation says that two contexts cannot run concurrently on the GPU. I wonder why CUDA doesn't support this. Is it because the hardware doesn't support it, or because the CUDA implementation doesn't handle context concurrency?
(2) Are there any documents that describe GPU contexts in detail, such as the context-switching procedure and the isolation mechanism (e.g., GPU page tables) between different contexts? I'd be grateful if you could point me to them.