I’ve got an MPI program running on a Linux node with multiple GPUs and CPUs. In one MPI task, I set up both an OpenGL context and a CUDA context. How do I make sure both contexts attach to the same GPU? For OpenGL, each GPU appears as a separately numbered X "screen", whereas CUDA assigns device numbers through its own API. I may eventually want to move data between OpenGL and CUDA, but for now I just want to prevent different tasks from trying to use the same GPU. Any advice on how to proceed?
Jon