Please use a new thread for new questions.
This thread was about the OptiX 1 - 6 API, and the statement “Multi-GPU support with OptiX is more or less automatic.” no longer holds in OptiX 7, where the situation is exactly the opposite.
The OptiX 7 API knows nothing about multiple devices! All GPU device handling happens in native CUDA code and everything concerning multiple GPUs needs to be handled by setting the correct devices and using the correct CUDA streams on them.
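To illustrate, here is a minimal sketch of what that explicit handling looks like on the host side. The struct and function names (`PerDeviceState`, `launchAll`) are purely illustrative, not part of any SDK:

```cpp
// Hedged sketch: OptiX 7 has no multi-GPU API of its own, so the
// application must select each CUDA device itself and keep one
// OptixDeviceContext, stream, pipeline, SBT, etc. per device.
#include <cuda_runtime.h>
#include <optix.h>
#include <vector>

struct PerDeviceState          // illustrative name
{
    int                ordinal;   // CUDA device ordinal
    cudaStream_t       stream;    // stream on that device
    OptixDeviceContext context;   // one OptiX context per device
    // ... pipeline, SBT, per-device output buffer, launch params, etc.
};

void launchAll(std::vector<PerDeviceState>& devices)   // illustrative name
{
    for (PerDeviceState& dev : devices)
    {
        cudaSetDevice(dev.ordinal);  // route all following CUDA calls to this GPU
        // optixLaunch(dev.pipeline, dev.stream, ...); // each GPU renders its part
    }
    // Synchronize and composite the per-device results afterwards.
}
```

How the per-device results are distributed and combined (e.g. which GPU renders which sub-frame, and where compositing happens) is the application's own rendering strategy; the two examples linked below demonstrate several variants of exactly that.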
Two of my OptiX 7 examples described here https://forums.developer.nvidia.com/t/optix-advanced-samples-on-github/48410/6 are specifically designed to show different multi-GPU rendering strategies, where multiple GPUs work on the same sub-frames of a progressive Monte Carlo path tracer.
What exactly would you want to do with MPICH and OptiX?
I have no experience with MPICH, but skimming one of the introductory presentations, it all boils down to inter-process communication (IPC) methods across systems.
While you can do whatever you want with CUDA buffers between OptiX launches, sharing some core data among different systems and processes in OptiX is hardly going to work.
OptiX programming requires native CUDA control, not higher-level libraries managing inter-process communication. For example, you cannot build acceleration structures from managed memory allocations: https://forums.developer.nvidia.com/t/optix-7-4-cudamallocmanaged/195302 and there are even more issues with IPC: https://forums.developer.nvidia.com/t/long-shot-access-mesh-data-in-different-program-but-already-loaded-in-the-gpu/157117/2
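As a concrete sketch of that constraint: acceleration structure build inputs should live in ordinary device memory allocated with `cudaMalloc`, not in managed memory from `cudaMallocManaged`. The buffer sizes and error handling are elided here for brevity:

```cpp
// Hedged sketch: feed optixAccelBuild from plain device memory.
// h_vertices / numVertices are assumed application data.
float3* d_vertices = nullptr;
cudaMalloc(&d_vertices, numVertices * sizeof(float3));          // NOT cudaMallocManaged
cudaMemcpy(d_vertices, h_vertices,
           numVertices * sizeof(float3), cudaMemcpyHostToDevice);

OptixBuildInput buildInput = {};
buildInput.type = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;

CUdeviceptr vertexBuffer = reinterpret_cast<CUdeviceptr>(d_vertices);
buildInput.triangleArray.vertexBuffers = &vertexBuffer;
buildInput.triangleArray.numVertices   = numVertices;
buildInput.triangleArray.vertexFormat  = OPTIX_VERTEX_FORMAT_FLOAT3;
// ... set flags, then call optixAccelComputeMemoryUsage and optixAccelBuild
// with temp and output buffers that are likewise cudaMalloc allocations.
```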