CUDA contexts - are they paged in/out automatically?

I’m wondering whether different CUDA contexts (in the Runtime API, not the Driver API) are paged in/out of GPU memory by the device driver automatically, transparently to the user?
Since in the CUDA Runtime API each CPU thread has its own ‘private’ CUDA context, this would have the advantage that in one application I could use several CPU threads, each of them able to access all of the GPU resources (e.g. every thread could allocate up to 100% of the GPU memory for itself).
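To make the scenario concrete, here is a minimal sketch of what I mean: two host threads, each of which (under the per-thread-context runtime model) gets its own private context and then tries to allocate most of the card’s memory. The sizes and thread setup are hypothetical, just for illustration; whether the second allocation succeeds is exactly the question.

```c
/* Sketch only: requires the CUDA runtime and a GPU; sizes are hypothetical. */
#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

static void *alloc_worker(void *arg)
{
    /* ~300 MB, most of a 384 MB card */
    size_t bytes = (size_t)300 * 1024 * 1024;
    void *dev_ptr = NULL;
    /* In the old runtime model, the first CUDA call in a thread
       implicitly creates that thread's private context. */
    cudaError_t err = cudaMalloc(&dev_ptr, bytes);
    printf("thread %ld: cudaMalloc -> %s\n", (long)(size_t)arg,
           cudaGetErrorString(err));
    if (err == cudaSuccess)
        cudaFree(dev_ptr);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, alloc_worker, (void *)1);
    pthread_create(&t2, NULL, alloc_worker, (void *)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

If contexts were transparently swapped in and out, both allocations could succeed; if contexts must be resident simultaneously, I would expect the second `cudaMalloc` to fail with an out-of-memory error.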

It doesn’t appear so, as I cannot run SETI@home and Folding@home at the same time on GPUs with only 384 MB of memory.

This could be because both apps request a lot of page-locked memory - or because swapping CUDA contexts in and out of GPU memory simply isn’t available as a feature yet. But would this feature even be desirable? Your GPU app would lose all of its speed benefit to the swapping overhead.

I thought that this was a feature of the Vista/Win7 driver model (WDDM)? And one of the reasons WDDM isn’t the best choice for CUDA apps (until the compute-only driver is released, anyway).