The documentation says:
Is there any more information on where the danger lies? Specifically, I'd like each host thread to create a context with its own geometry, sources, buffers, etc., and have that context assigned to a specific GPU (chosen via a command-line argument) using setDevices. If I launch two contexts in this manner, one constrained to device 0 and the other to device 1, what exactly gives rise to the potentially erroneous behaviour?

I've run a few experiments with my application using this approach to process separate scenes in parallel, and have not found any errors so far. However, I did notice that even with the device constraint placed on the OptiX context, nvidia-smi reported an increase in memory usage on the unused device when the context launched. The increase was much smaller than on the constrained device, but I'd still like to know why it increases at all. Nsight showed that only the constrained device was doing any work for the launched context.
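For reference, here is a minimal sketch of what each host thread does, assuming the pre-OptiX-7 C++ wrapper (optixpp); the function and variable names are just illustrative, not my actual code:

```cpp
#include <optix_world.h>
#include <cstdlib>
#include <vector>

// Hypothetical helper: create a context and restrict it to one CUDA device.
optix::Context createContextOnDevice(int deviceOrdinal)
{
    optix::Context context = optix::Context::create();

    // Constrain this context to a single device before any other setup.
    std::vector<int> devices(1, deviceOrdinal);
    context->setDevices(devices.begin(), devices.end());

    // ... create geometry, sources, buffers, programs for this scene here ...

    return context;
}

int main(int argc, char** argv)
{
    // Device ordinal taken from the command line, e.g. "./app 0" or "./app 1".
    int deviceOrdinal = (argc > 1) ? std::atoi(argv[1]) : 0;

    optix::Context context = createContextOnDevice(deviceOrdinal);

    // context->launch(0, width, height);  // launch entry point 0 on the chosen device

    context->destroy();
    return 0;
}
```

In the real application, two host threads each run roughly this setup, one with device 0 and one with device 1, and render their own scenes independently.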