Hi there, we are using OptiX Prime and are observing reduced performance when more than one CUDA device is installed in the system.
We tracked it down to OptiX Prime’s automatic distribution of work across multiple GPUs. We can alleviate it to some extent by calling rtpContextSetCudaDeviceNumbers() after creating the OptiX Prime context; however, we still observe unwanted memory allocations by OptiX Prime on all available CUDA devices.
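For context, here is a minimal sketch of what we are doing (error handling omitted; device index 0 is just an example, and the surrounding setup is elided):

```cpp
#include <optix_prime/optix_prime.h>

int main()
{
    RTPcontext context = nullptr;

    // Create a CUDA-backed OptiX Prime context; by default it may
    // distribute work across all CUDA devices in the system.
    rtpContextCreate(RTP_CONTEXT_TYPE_CUDA, &context);

    // Restrict traversal to a single device (index 0 here, as an example).
    // In our experience this reduces, but does not eliminate,
    // memory allocations on the other GPUs.
    unsigned int deviceNumbers[] = { 0 };
    rtpContextSetCudaDeviceNumbers(context, 1, deviceNumbers);

    // ... build model, create queries, execute ...

    rtpContextDestroy(context);
    return 0;
}
```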
Is there a way to disable the automatic use of multiple GPUs, both in terms of memory and compute resources, when using OptiX Prime?
Setting the environment variable CUDA_VISIBLE_DEVICES achieves the desired effect, but we would prefer not to rely on this workaround: according to https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/ it is intended for testing, not for production use.
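For completeness, the workaround looks like this (the application name is hypothetical):

```shell
# Expose only physical device 0 to the process; OptiX Prime then
# cannot see, or allocate on, the other GPUs.
export CUDA_VISIBLE_DEVICES=0
echo "$CUDA_VISIBLE_DEVICES"

# Hypothetical invocation of our renderer:
# CUDA_VISIBLE_DEVICES=0 ./our_app
```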