In one project of mine, I am running an independent thread in which computations via CUDA are supposed to be made. I've searched around and am now a bit confused about how to do this, i.e. how to properly initialize everything so that CUDA calls can be made from that thread.
I know that, via the CUDA Driver API, you can manage the contexts yourself and pop and push them between different threads, using cuCtxCreate, cuCtxPopCurrent and cuCtxPushCurrent. However, I found some posts indicating that since CUDA 3.2 this can also be done via the CUDA Runtime API. I am not sure how, though — there do not seem to be any cuda* functions in the Runtime API for context management. The project in question currently uses CUDA 3.2.
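For reference, the Driver API approach I mean looks roughly like this — an untested sketch I pieced together from the docs, with error handling reduced to asserts. The main thread creates a context and pops it, and the worker thread pushes it before doing any CUDA work:

```c
// Sketch only: migrating a Driver API context from the main thread to a
// worker thread with cuCtxPopCurrent / cuCtxPushCurrent.
#include <cuda.h>
#include <pthread.h>
#include <assert.h>
#include <stddef.h>

static CUcontext ctx;

static void *worker(void *arg)
{
    // Attach the context created by the main thread to this thread.
    assert(cuCtxPushCurrent(ctx) == CUDA_SUCCESS);

    CUdeviceptr d;
    assert(cuMemAlloc(&d, 1024) == CUDA_SUCCESS);  // any CUDA work goes here
    assert(cuMemFree(d) == CUDA_SUCCESS);

    // Detach again so another thread could take the context over.
    assert(cuCtxPopCurrent(&ctx) == CUDA_SUCCESS);
    return NULL;
}

int main(void)
{
    CUdevice dev;
    assert(cuInit(0) == CUDA_SUCCESS);
    assert(cuDeviceGet(&dev, 0) == CUDA_SUCCESS);
    assert(cuCtxCreate(&ctx, 0, dev) == CUDA_SUCCESS);

    // cuCtxCreate makes the new context current to this thread;
    // pop it so the worker thread can push it instead.
    assert(cuCtxPopCurrent(&ctx) == CUDA_SUCCESS);

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);

    assert(cuCtxDestroy(ctx) == CUDA_SUCCESS);
    return 0;
}
```

What I'm after is the Runtime API equivalent of this dance, if one exists in 3.2.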
Any hints for this?
// Forgot to mention: the main thread in this project also makes some CUDA calls, which is why I need to make CUDA calls from different threads (it's a bit messy); otherwise I guess I would only have to create the CUDA context in my thread and not worry about it.