I am trying to use CUDA with multiple CPU threads.
At first I implemented it using the runtime API, but it did not work.
From previous topics, I learned that only the thread that allocates memory can use that memory, and that sharing memory among threads requires context migration.
So I will try the driver API.
This is my plan…
Before starting the main part of the application, one thread creates n GPU contexts and allocates memory within each context.
When multiple CPU threads are invoked, each CPU thread pushes one of the created contexts and uses it to invoke kernels, including memory copies between host and device.
The contexts are reused every frame and may be used by an arbitrary thread.
After all frames are done, the memory is released by a CPU thread.
I want to know whether this is possible before I start writing my code.
Please give me your opinion or advice, and share your experience if you have tried something similar.
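To make the plan concrete, here is a rough driver-API sketch of what I mean. Error checking is omitted, NUM_CONTEXTS, setup(), worker(), and teardown() are placeholder names of my own, and a real version would need some locking to hand a free context to an arbitrary thread:

```c
#include <cuda.h>
#include <pthread.h>
#include <stddef.h>

#define NUM_CONTEXTS 2

static CUcontext   g_ctx[NUM_CONTEXTS];
static CUdeviceptr g_buf[NUM_CONTEXTS];

/* Step 1: one thread creates the contexts and allocates memory in each,
 * then pops them so they are "floating" and not bound to this thread. */
void setup(void)
{
    CUdevice dev;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    for (int i = 0; i < NUM_CONTEXTS; ++i) {
        cuCtxCreate(&g_ctx[i], 0, dev);  /* new context becomes current here */
        cuMemAlloc(&g_buf[i], 1 << 20);  /* allocation belongs to g_ctx[i] */
        cuCtxPopCurrent(NULL);           /* detach so another thread can push it */
    }
}

/* Step 2: each CPU thread pushes a context, does the per-frame work,
 * and pops it again. (Here each thread uses a fixed index; sharing a
 * pool of contexts among arbitrary threads needs synchronization.) */
void *worker(void *arg)
{
    int i = (int)(size_t)arg;
    cuCtxPushCurrent(g_ctx[i]);          /* declare ownership */
    /* ... cuMemcpyHtoD / kernel launches / cuMemcpyDtoH on g_buf[i] ... */
    cuCtxPopCurrent(NULL);               /* relinquish ownership */
    return NULL;
}

/* Step 3: after the last frame, any one thread frees everything. */
void teardown(void)
{
    for (int i = 0; i < NUM_CONTEXTS; ++i) {
        cuCtxPushCurrent(g_ctx[i]);
        cuMemFree(g_buf[i]);
        cuCtxDestroy(g_ctx[i]);          /* destroys and pops the context */
    }
}
```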
Do you mean a GPU cannot have more than one context?
If my understanding is right, the threadMigration example in the CUDA SDK creates two contexts per GPU, and each of the two CPU threads uses one context.
A context is bound to a single thread at a time; this is why the context migration APIs push a context onto a thread's context stack to declare ownership of it and pop it to relinquish ownership.
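In code, the handoff looks roughly like this (a minimal sketch of my own, not from the SDK, assuming a context that was created and then popped so it is floating; the mutex serializes ownership, since the context can be current to only one thread at a time):

```c
#include <cuda.h>
#include <pthread.h>

static CUcontext       g_ctx;  /* created elsewhere, then popped (floating) */
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;

void use_gpu(void)
{
    pthread_mutex_lock(&g_lock);
    cuCtxPushCurrent(g_ctx);   /* this thread now owns the context */
    /* ... kernel launches and memory copies ... */
    cuCtxPopCurrent(NULL);     /* context is floating again */
    pthread_mutex_unlock(&g_lock);
}
```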
Depending on your duty cycle for using the GPU, this works much better than I expected. I use CUDA in one of my programs to accelerate just one part of a longer analysis chain, so the GPU is underutilized by a single process. (It’s about a 20% duty cycle.) I can run four copies of this program on a Phenom with a GTX 280, and the context switch overhead is barely visible.
And now that I have a Core i7 to play with, I tried out 8 copies on one half of a GTX 295, and it was still pretty good, though visibly slower now that the CUDA device was oversubscribed. (That said, the hyperthreading+GPU actually gave me a 50% boost in efficiency with 8 processes over 4. Looks like Intel got HT right this time.)
I remember the switching overhead being a disaster in CUDA 0.8 when I last checked, but it is much better now. I still wouldn’t suggest it for programs that use the GPU 100% of the time, which is probably what tmurray is worried about.