cudaSetDevice switch to different thread?


Can someone please clarify cudaSetDevice() for me?
The programming guide PDF says:

cudaSetDevice() is used to select the device associated to the host thread:
A device must be selected before any global function or any function from Appendix D is called. If this is not done by an explicit call to cudaSetDevice(), device 0 is automatically selected and any subsequent explicit call to cudaSetDevice() will have no effect.
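Under those rules, explicit device selection has to come before any allocation or launch from the host thread. A minimal sketch (the kernel and sizes are illustrative, using the runtime API of that CUDA era):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *d) { d[threadIdx.x] *= 2.0f; }

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    // Must be called before any allocation or kernel launch from this
    // host thread; otherwise device 0 is selected implicitly and any
    // later cudaSetDevice() call has no effect, as the guide says.
    cudaSetDevice(0);

    float *d_buf;
    cudaMalloc((void **)&d_buf, 256 * sizeof(float));
    scale<<<1, 256>>>(d_buf);
    cudaThreadSynchronize();   // pre-4.0 name for device synchronization
    cudaFree(d_buf);
    return 0;
}
```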

Can one device be switched between different CPU threads so that they share the device and constant memory resources? Thanks!


Hi Steven,

Resources cannot be shared between different GPUs; you need to allocate them per GPU.
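In other words, each host thread binds to its own GPU and makes its own allocation; the resulting pointers are not interchangeable between devices. A hypothetical sketch using pthreads (the kernel and buffer size are made up for illustration):

```cuda
#include <cuda_runtime.h>
#include <pthread.h>

__global__ void add_one(float *d, int n)
{
    int i = threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

// One worker per GPU: each CPU thread selects its own device and
// allocates its own buffer on it.
static void *worker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);          // bind this CPU thread to its GPU

    float *d_buf;                // valid only on this thread's device
    cudaMalloc((void **)&d_buf, 1024 * sizeof(float));
    add_one<<<1, 1024>>>(d_buf, 1024);
    cudaThreadSynchronize();
    cudaFree(d_buf);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; ++i)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; ++i)
        pthread_join(t[i], NULL);
    return 0;
}
```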

Perhaps I'm getting this wrong, but you want to use some data with kernel1 on GPU1 and then reuse that data with kernel2 on GPU2.
In general there should be no problem doing so, but you have to explicitly control, from your CPU threads, which kernel is called at which time. AFAIK no memory sweep is done on the GPU when you enter or exit a kernel. Of course you have to pass the memory pointer between your CPU threads.
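Assuming both kernels run on the same device from the same host thread, the reuse pattern can be sketched like this (kernel names are illustrative; since no sweep happens between launches, the same device pointer simply carries the data from one kernel to the next):

```cuda
#include <cuda_runtime.h>

__global__ void kernel1(float *d, int n)
{
    int i = threadIdx.x;
    if (i < n) d[i] = (float)i;      // produce data on the GPU
}

__global__ void kernel2(float *d, int n)
{
    int i = threadIdx.x;
    if (i < n) d[i] *= 2.0f;         // consume kernel1's output
}

int main(void)
{
    const int n = 256;
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    kernel1<<<1, n>>>(d_buf, n);
    cudaThreadSynchronize();         // make sure kernel1 has finished

    // The allocation is untouched between launches, so kernel2 can
    // read kernel1's results through the very same pointer.
    kernel2<<<1, n>>>(d_buf, n);
    cudaThreadSynchronize();

    cudaFree(d_buf);
    return 0;
}
```

Note that launches issued from the same host thread are serialized on the device anyway; the explicit synchronization just makes the ordering obvious, and is what you would enforce manually if the two launches came from different CPU threads.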