Using Multiple Host Threads with Unique Contexts Tied to Devices

Hey all,

The documentation says:

Is there any more information on where the danger lies? Specifically, I'd like each host thread to create a context with its own geometry, sources, buffers, etc., and have it assigned to a specific GPU (selected via a command-line argument) using setDevices. If I launch two contexts in this manner, one constrained to device 0 and the other to device 1, what exactly gives rise to potentially erroneous behaviour?

I've run a few experiments with my application using this approach to process separate scenes in parallel and have yet to find any errors. However, I did notice that even with the device constraint placed on the OptiX context, nvidia-smi reported an increase in memory usage on the unused device when the context launched. It was much smaller than the increase on the main device, but I'd still like to know why it increased at all. Nsight showed that indeed only the constrained device was doing any work for the launched context.
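For reference, here is a rough sketch of the approach I'm describing, assuming the pre-OptiX-7 C++ wrapper (optixpp); renderScene, the launch dimensions, and the argument handling are just placeholders, and the per-scene setup is elided:

```cpp
// Untested sketch: one OptiX context per host thread, each constrained
// to a single device via setDevices().
#include <optixu/optixpp_namespace.h>

#include <cstdlib>
#include <thread>
#include <vector>

void renderScene(int deviceOrdinal)
{
    // One context per host thread, never shared across threads.
    optix::Context ctx = optix::Context::create();

    // Constrain this context to the single device chosen on the command line.
    std::vector<int> devices = { deviceOrdinal };
    ctx->setDevices(devices.begin(), devices.end());

    // ... create this scene's geometry, sources, buffers, programs here ...

    ctx->launch(0 /* entry point */, 1920, 1080);
    ctx->destroy();
}

int main(int argc, char** argv)
{
    if (argc < 3) return 1;

    // e.g. "./app 0 1" -> scene A constrained to device 0, scene B to device 1
    std::thread t0(renderScene, std::atoi(argv[1]));
    std::thread t1(renderScene, std::atoi(argv[2]));
    t0.join();
    t1.join();
    return 0;
}
```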

Thanks

I believe the documentation is warning about using a single context from multiple threads in parallel.
Having one context per thread should work.

Ah I see, so you could even dispatch multiple contexts in parallel to the same device so long as each context is unique to a host thread?

That sounds great, thanks!

Looking at the section on OptiX-CUDA interop, I came across:

Is this indicating that an OptiX context can only interoperate with at most one CUDA context on a device, or that, in general, if any OptiX context wants to interoperate with a CUDA context, there must be only one CUDA context per device, regardless of the number of OptiX contexts? I'm leaning towards the latter, but I'd like confirmation.

The ramification here is that while you can launch multiple OptiX contexts on a device, each mapped to its own host thread, each of those OptiX contexts cannot spawn its own unique CUDA context for post-processing; they would all have to share the one CUDA context on that device.
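To make that concrete, this is the post-processing path I'm picturing, as an untested sketch assuming the optixpp wrapper plus the CUDA runtime API (which attaches to the device's primary context); launchToneMapKernel is a hypothetical kernel wrapper, not a real API:

```cpp
// Sketch: post-process an OptiX output buffer with CUDA through the
// device's primary context rather than a per-OptiX-context CUcontext.
#include <optixu/optixpp_namespace.h>
#include <cuda_runtime.h>

void launchToneMapKernel(float4* pixels, int width, int height); // hypothetical

void postProcess(optix::Buffer output, int deviceOrdinal, int width, int height)
{
    // All OptiX contexts on this device would share this one (primary)
    // CUDA context instead of each creating its own CUcontext.
    cudaSetDevice(deviceOrdinal);

    // Raw device pointer to the buffer contents on this device. Note the
    // argument is the OptiX device ordinal, which may differ from the CUDA
    // ordinal when setDevices() has limited or reordered the devices.
    void* devPtr = output->getDevicePointer(deviceOrdinal);

    launchToneMapKernel(static_cast<float4*>(devPtr), width, height);
    cudaDeviceSynchronize();
}
```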

Thanks