I have a question:
Let’s say I have two GPUs in my system and two host threads running CUDA code.
If I select the same device using cudaSetDevice for both of my threads, and thread 1 wants to start work on the GPU while thread 2 is currently occupying it, will thread 1 then switch to the second GPU (as it could, if I understand the automatic free-GPU selection feature correctly), or will it wait until the first GPU is free again?
Thanks in advance!
If you call cudaSetDevice(x) in a thread, subsequent CUDA operations in that thread will go to device x. There is no automatic switching. If your GPU is set to default compute mode or exclusive_process mode, both threads will share the GPU; there is no implicit waiting “until the GPU is free again”. For example, if both threads call cudaMalloc, both allocations will attempt to reserve space in that GPU’s memory. If the compute mode is set to exclusive_thread, then one thread gets to use the GPU, and CUDA operations in the other thread return an error.
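As a rough sketch (this needs a CUDA toolkit and GPU to build and run, so treat it as illustrative, not tested here), two host threads can both target device 0 and observe what happens. The 256 MiB allocation size and the thread setup are arbitrary choices for the example:

```cuda
// Two host threads both select device 0. Neither thread "falls over"
// to device 1; in default/exclusive_process mode both allocations
// land on device 0, and in exclusive_thread mode one of them fails.
#include <cstdio>
#include <thread>
#include <cuda_runtime.h>

void worker(int device, int id) {
    // The device selection is per host thread.
    cudaSetDevice(device);

    void *buf = nullptr;
    cudaError_t err = cudaMalloc(&buf, 256 * 1024 * 1024); // 256 MiB
    if (err != cudaSuccess) {
        // In exclusive_thread mode (or if memory is exhausted), the
        // runtime returns an error rather than waiting for the device.
        printf("thread %d: cudaMalloc failed: %s\n",
               id, cudaGetErrorString(err));
        return;
    }
    printf("thread %d: allocated on device %d\n", id, device);
    cudaFree(buf);
}

int main() {
    std::thread t1(worker, 0, 1);
    std::thread t2(worker, 0, 2);
    t1.join();
    t2.join();
    return 0;
}
```

Compile with something like `nvcc -std=c++11 example.cu -o example`. In default compute mode you should see both threads succeed on device 0, confirming that they share the device rather than one of them migrating to device 1.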