Mixing driver and runtime API code: using contexts in high-level CUDA code


I coded an app that uses the runtime API (cuda* functions), but it is very desirable for me to have control over the context parameters, and over pushing and popping contexts. Can I add cuCtx* calls into my cuda* code without breaking anything? Or should I migrate everything to work with the low-level driver API directly? According to the docs, the two APIs are mutually exclusive, so I should not use them together. Thoughts?
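For reference, here is a minimal sketch of the kind of mixing being asked about: creating and managing a context through the driver API while doing allocations through the runtime API. Whether this actually works depends on the CUDA version (the interoperability described in the replies below), so treat it as an illustration rather than a guaranteed-safe pattern; error checking is omitted for brevity.

```cuda
// Sketch: driver-API context management around runtime-API calls.
// Assumes a CUDA version where the two APIs interoperate.
#include <cuda.h>          // driver API (cu* functions)
#include <cuda_runtime.h>  // runtime API (cuda* functions)

int main(void)
{
    // Create a context explicitly through the driver API.
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);   // ctx becomes current for this thread

    // Runtime calls should now attach to the current driver context
    // instead of creating their own.
    float *d_buf;
    cudaMalloc((void **)&d_buf, 256 * sizeof(float));
    cudaFree(d_buf);

    // Driver-API context stack operations around runtime code.
    cuCtxPopCurrent(&ctx);       // detach the context from this thread
    cuCtxPushCurrent(ctx);       // reattach it before further cuda* calls

    cuCtxDestroy(ctx);
    return 0;
}
```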

As far as I know it isn’t possible; you will get a conflict. I ended up with 2 versions of host code because of this. But I for one prefer the driver code.


I have written a small library that reimplements the CUDA runtime calls in terms of the CUDA driver API. It allows you to use the driver API and at the same time compile the code using nvcc with the kernel_name<<< >>>() syntax.

I will make it public soon, if you are interested, just ask me and I can send you the code.
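The core idea of such a shim can be sketched in a few lines: provide functions with the runtime API's signatures that forward to the driver API. The sketch below is hypothetical (it is not the poster's actual library) and shows only one call, with a deliberately simplified error mapping; a real shim would translate every CUresult to its matching cudaError_t and cover the whole runtime surface.

```cuda
// Hypothetical shim: re-implement cudaMalloc on top of cuMemAlloc.
#include <cuda.h>              // driver API
#include <cuda_runtime_api.h>  // cudaError_t and the runtime prototypes
#include <stdint.h>

cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    CUdeviceptr dptr;
    CUresult rc = cuMemAlloc(&dptr, size);
    if (rc != CUDA_SUCCESS)
        return cudaErrorMemoryAllocation;  // simplified error mapping
    *devPtr = (void *)(uintptr_t)dptr;
    return cudaSuccess;
}
```

Kernel launches are the harder part: the <<< >>> syntax that nvcc emits has to end up calling the driver API's module/launch machinery (cuModuleGetFunction, cuLaunchKernel) instead of the runtime's launch path.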

Driver/runtime API interop works with everything except context migration. Context migration still causes things to explode catastrophically.
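To make the failing case concrete, here is a sketch of the migration pattern that would trigger the problem described: popping a context that runtime-API state is attached to off one thread and pushing it onto another. The thread functions are hypothetical illustrations, not a real reproduction case.

```cuda
// Sketch of the problematic context-migration pattern.
#include <cuda.h>
#include <cuda_runtime.h>

CUcontext ctx;  // context being migrated between threads

void thread_a(void)
{
    float *d;
    cudaMalloc((void **)&d, 1024);  // runtime state binds to the current context
    cuCtxPopCurrent(&ctx);          // migrate the context away (now floating)
}

void thread_b(void)
{
    cuCtxPushCurrent(ctx);          // adopt the migrated context here
    float *d;
    cudaMalloc((void **)&d, 1024);  // runtime calls on the migrated context
                                    // are what reportedly blow up
}
```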

Great, everything except the one thing I needed …

Great news! :)

Another piece of CUDA gets an open replacement…

Is the whole CUDA Runtime supported?

Any chance that it can be altered to use OpenCL rather than the Driver API? (assuming some source-to-source translator takes care of the device code)

If so, that would be… interesting. ;)

Could you explain in a little detail what does not work?