I have a dual-core machine with a single PCIe x16 host interface card that controls 2 GPUs (one half of a Tesla S1070 unit). From what I understand, the CUDA runtime executes on device 0 by default. I have modified my code so that, at compile time, I can choose either device 0 or device 1. However, I need to make this a runtime decision: my code should select device 1 if device 0 is already running some code, or the other way round. I've seen cudaChooseDevice() and other similar calls, but they don't solve the problem, since both devices are exactly identical. Any ideas on how I could go about this?
Exclusive mode; see nvidia-smi --help.
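For reference, compute-exclusive mode is set per GPU with nvidia-smi (must be run as root; the exact flags vary by driver version, so check nvidia-smi --help on your system). A sketch with the symbolic syntax used by newer drivers, assuming the two halves of the S1070 show up as devices 0 and 1:

```shell
# Put both GPUs into compute-exclusive mode so only one
# context can attach to each device at a time.
# Newer drivers accept symbolic names:
nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
nvidia-smi -i 1 -c EXCLUSIVE_PROCESS

# Older drivers of the S1070 era use numeric codes instead,
# e.g. "nvidia-smi -g 0 -c 1" for compute-exclusive.

# Verify the setting:
nvidia-smi -q | grep -i "compute mode"
```

Note the setting does not persist across reboots unless the driver's persistence mode is enabled or the commands are run at boot.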
Thanks for the tip, I'll take a look!
To elaborate just a tiny bit on Tim’s concise answer: set both GPUs to compute exclusive mode and then simply do not call cudaSetDevice in your application. The CUDA driver/runtime will take care of the rest!
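To illustrate, a minimal sketch of what the application side looks like under this scheme. The key point is the absence of cudaSetDevice(): with both GPUs in exclusive mode, the first call that needs a context attaches to whichever device is free, and fails if both are busy. (The cudaFree(0) here is just a conventional no-op to force context creation; any kernel launch or allocation would do the same.)

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Deliberately do NOT call cudaSetDevice(). The first call that
    // needs a context makes the runtime attach to any device that is
    // not already owned by another exclusive-mode context.
    cudaError_t err = cudaFree(0); // no-op; forces context creation
    if (err != cudaSuccess) {
        // Both GPUs already have a context: no free device was available.
        fprintf(stderr, "no free device: %s\n", cudaGetErrorString(err));
        return 1;
    }

    int dev = -1;
    cudaGetDevice(&dev); // report which device the runtime actually chose
    printf("running on device %d\n", dev);

    // ... launch kernels as usual; they run on the chosen device ...
    return 0;
}
```

Run two instances of this and each should report a different device; a third instance fails cleanly instead of oversubscribing a GPU.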