CUDA multiple contexts

Hi all,
I am new to GPGPU and am just designing the port of my application, and the docs seem thin at the top level, namely context management. Reading these forums, it appears that one may run CUDA apps on a GPU that is also driving a display, which indicates that the board has an executive able to schedule multiple contexts. That seems sensible, but no mention is made of running multiple CUDA contexts, each from a different CPU thread. A mention that different contexts have their own virtual address spaces suggests that we can run concurrent contexts.
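To make the question concrete, here is the kind of per-thread setup I have in mind, written against the driver API. This is an untested sketch (I don't have a card yet) with error checking omitted; the two-thread worker is purely illustrative. Each CPU thread creates its own context on device 0 with cuCtxCreate, and an allocation made in one context should not be visible from the other:

```c
#include <cuda.h>
#include <pthread.h>
#include <stdio.h>

/* Hypothetical worker: each CPU thread creates its own context on
 * device 0. cuCtxCreate makes the new context current to the calling
 * thread, so the cuMemAlloc below lands in that thread's own
 * virtual address space. Error checks omitted for brevity. */
static void *worker(void *arg) {
    CUdevice dev;
    CUcontext ctx;
    CUdeviceptr buf;

    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);      /* context is current to this thread */
    cuMemAlloc(&buf, 1 << 20);      /* lives in this context only        */

    printf("thread %ld: ctx %p, dptr %p\n",
           (long)(size_t)arg, (void *)ctx, (void *)(size_t)buf);

    cuMemFree(buf);
    cuCtxDestroy(ctx);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    cuInit(0);                      /* must precede all other driver calls */
    for (long i = 0; i < 2; ++i)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; ++i)
        pthread_join(t[i], NULL);
    return 0;
}
```

The question is whether two such contexts can actually execute on one device concurrently, or whether the driver serializes them.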

In trying to design an overall implementation, many questions arise about how contexts might be scheduled: allocation of blocks to multiprocessors, sharing of kernel code (text) between contexts, and so on. On the other hand, some topics (such as the lack of support for kernel execution in parallel with host transfers) suggest that one can only fire off one context at a time on any single device… which is it?
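To show the serialization I am asking about, here is a toy runtime-API sequence (again untested; the scale kernel and buffer sizes are made up for the example). If transfers and kernels cannot overlap, then each of the three device steps below occupies the device exclusively, even from a single context:

```c
#include <cuda_runtime.h>
#include <stdlib.h>

/* Toy kernel: scales a buffer in place, one element per thread. */
__global__ void scale(float *d, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= k;
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, bytes);

    /* If copies and kernels cannot overlap, these three steps run
     * strictly one after another on the device: the copy in finishes
     * before the kernel starts, and the kernel finishes before the
     * copy out begins. */
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d);
    free(h);
    return 0;
}
```

If that is how a single context behaves, it is hard to see how two contexts could interleave work on one device.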

I cannot play with it yet, as I am waiting for the Linux amd64 CUDA driver, which I am glad to hear is coming next month. Time to go out and buy a card.

Overall this looks like awesome stuff - Thanks!


OK, I found this thread: http://forums.nvidia.com/index.php?showtopic=28823 and the answer is currently NO, for anyone else looking.