I have three questions about the automatic context management offered by the CUDA Runtime API, after reading the documentation sections 6.2.1 (Initialization) and 6.31 (Interactions with the CUDA Driver API).
Q1. For example, I have the code below.

#include <cuda_runtime.h>

int main() {
    cudaSetDevice(0);  // first call, device 0
    cudaSetDevice(1);  // switch to device 1
    cudaSetDevice(0);  // switch back to device 0
    return 0;
}
I suppose the CUDA Runtime would behave as follows.
- For the first cudaSetDevice(0): the Runtime does whatever initialization it needs and creates the primary context (context#0) for device 0.
- For cudaSetDevice(1): the Runtime does the per-device setup and creates the primary context (context#1) for device 1.
- For the second cudaSetDevice(0): the Runtime does not create a new context; it sets the existing context#0 as the calling host thread's current context.
Is my understanding correct?
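To make the question concrete, this is roughly how I would try to verify it by mixing in the Driver API, which the Interactions section says is allowed. The cudaFree(0) calls are just my assumption of a way to force the (possibly lazily created) primary context to actually exist and be bound before I inspect it; this is a sketch, not something I claim is the official way to check.

#include <cstdio>
#include <cuda.h>          // Driver API, for cuCtxGetCurrent (link with -lcuda)
#include <cuda_runtime.h>  // Runtime API (link with -lcudart)

static CUcontext currentCtx() {
    CUcontext ctx = nullptr;
    cuCtxGetCurrent(&ctx);   // context currently bound to the calling host thread
    return ctx;
}

int main() {
    cudaSetDevice(0);
    cudaFree(0);                      // assumed to force creation of device 0's primary context
    CUcontext first = currentCtx();

    cudaSetDevice(1);
    cudaFree(0);                      // assumed to force creation of device 1's primary context
    CUcontext second = currentCtx();

    cudaSetDevice(0);
    cudaFree(0);                      // make sure the Runtime has (re)bound the context
    CUcontext again = currentCtx();   // expected: same handle as `first`

    printf("device0=%p device1=%p device0 again=%p (restored: %s)\n",
           (void*)first, (void*)second, (void*)again,
           again == first ? "yes" : "no");
    return 0;
}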
Q2. I have two host threads. The first host thread calls cudaSetDevice(0), and then the second host thread starts and also calls cudaSetDevice(0).
For the second call, the Runtime will not initialize any new context; it will just set the already-created primary context of device 0 as the second host thread's current context.
Is that correct?
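Again just to be concrete, this is the kind of two-thread check I have in mind, assuming that cuCtxGetCurrent() on each thread reports the context the Runtime bound to that thread (and that cudaFree(0) is enough to force the binding):

#include <cstdio>
#include <thread>
#include <cuda.h>          // Driver API, for cuCtxGetCurrent (link with -lcuda)
#include <cuda_runtime.h>

int main() {
    CUcontext ctxFirst = nullptr, ctxSecond = nullptr;

    std::thread t1([&] {
        cudaSetDevice(0);
        cudaFree(0);                 // make sure device 0's primary context exists
        cuCtxGetCurrent(&ctxFirst);  // context bound to this (first) thread
    });
    t1.join();

    std::thread t2([&] {
        cudaSetDevice(0);            // same device, different host thread
        cudaFree(0);
        cuCtxGetCurrent(&ctxSecond); // context bound to this (second) thread
    });
    t2.join();

    printf("thread1 ctx=%p thread2 ctx=%p same: %s\n",
           (void*)ctxFirst, (void*)ctxSecond,
           ctxFirst == ctxSecond ? "yes" : "no");
    return 0;
}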
Q3. I have learned that "There exists a one to one relationship between CUDA devices in the CUDA Runtime API and CUcontexts in the CUDA Driver API within a process."
So no matter how many host threads call cudaSetDevice(): if the primary context for the desired device doesn't exist, the Runtime creates it; otherwise, the Runtime just sets the existing context as the calling host thread's current context. Is that correct?
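If that is right, I would expect a check like the one below to print the same handle twice. Here I am assuming that cuDevicePrimaryCtxRetain() is the right Driver API way to get at a device's primary context, and that calling cuInit() explicitly is harmless after the Runtime has already initialized the driver; again, just a sketch of what I have in mind.

#include <cstdio>
#include <cuda.h>
#include <cuda_runtime.h>

int main() {
    cuInit(0);                                 // explicit driver init; assumed harmless here

    cudaSetDevice(0);
    cudaFree(0);                               // make sure the primary context exists and is bound

    CUcontext current = nullptr;
    cuCtxGetCurrent(&current);                 // what the Runtime bound to this thread

    CUdevice dev;
    CUcontext primary = nullptr;
    cuDeviceGet(&dev, 0);                      // driver handle for device 0
    cuDevicePrimaryCtxRetain(&primary, dev);   // device 0's primary context

    printf("current=%p primary=%p same: %s\n",
           (void*)current, (void*)primary,
           current == primary ? "yes" : "no");

    cuDevicePrimaryCtxRelease(dev);            // balance the retain
    return 0;
}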
Any help would be appreciated.
Jack