Is it possible to write a CUDA program that uses one CPU to drive more than one GPU? :rolleyes:
Or is there a restriction that the mapping between CPU and GPU must be one-to-one?
Since the CPU has to wait until the GPU completes its computation, it could instead be used to issue compute instructions to another GPU.
You will basically need one CPU thread for each GPU you want to address.
cudaSetDevice works within the thread that invoked it, so there's a one-to-one mapping between a GPU and a CPU thread. You can naturally run more than one thread on a CPU.
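A minimal sketch of the thread-per-GPU pattern using pthreads (the kernel, array size, and thread cap here are hypothetical, chosen just for illustration; it assumes the runtime API of that era, where cudaSetDevice binds the calling host thread to one device and cudaThreadSynchronize waits on that thread's device):

```cpp
#include <cstdio>
#include <pthread.h>
#include <cuda_runtime.h>

// Hypothetical kernel: scales a buffer in place on whichever GPU
// the calling host thread is bound to.
__global__ void scale(float *d, float f, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

// One worker thread per GPU; each thread owns its own device context.
static void *worker(void *arg) {
    int dev = *(int *)arg;
    cudaSetDevice(dev);                 // binds THIS thread to device `dev`
    const int n = 1 << 20;
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaThreadSynchronize();            // only this thread waits on this GPU
    cudaFree(d);
    printf("device %d done\n", dev);
    return NULL;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count > 16) count = 16;         // arbitrary cap for the example
    pthread_t threads[16];
    int ids[16];
    for (int i = 0; i < count; ++i) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < count; ++i)
        pthread_join(threads[i], NULL);
    return 0;
}
```

While each worker blocks in cudaThreadSynchronize, the other workers keep their GPUs busy, which is exactly the overlap the question is after.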