My application doesn’t benefit significantly from using two GPUs. A better use of a multicore computer with two GPUs would be to run two processes simultaneously, with each process using its own GPU. The processes don’t need to communicate with each other.
I know how to use cudaGetDeviceCount(), cudaGetDeviceProperties() and cudaSetDevice() to explicitly select one of the GPUs to use. But suppose process one is already running and using one of the GPUs. When process two starts, how can it tell which GPU is busy with process one and which GPU is available for its use? I’d rather not require the user to keep track of this and explicitly select one.
I thought about using cuMemGetInfo() and picking whichever GPU has more memory available, but that heuristic breaks down if the two GPUs don’t have the same amount of memory.
I was hoping to find something useful among the context-management functions of the driver API, but I can’t find anything that addresses this.