GPU computing with PyCUDA and a GPU-intensive VR application

I am developing a virtual environment for data analysis. One of the machine-learning processes uses Numba/PyCUDA to run computations on the GPU; this runs under Anaconda. At the same time, a VR application needs to run. The VR application is built with C++ and OpenGL in Visual Studio.

If I understand correctly, the GPU can handle one context at a time. So does that mean that the script and the VR application are competing for the GPU?

Is there a way to control this access?
I have a GTX 1070 and Intel integrated graphics. Is it possible to control how the hardware resources are shared?

I am trying to get the best performance out of both the ML and VR workloads, so please suggest a good way to do this.

Yes. Currently, a GPU does either computation or graphics, but not both, at any given instant.

There isn’t any way to control this, other than not doing one or the other. Otherwise, context switching is under the control of the GPU driver, with no user-facing controls or published heuristics.

If you have two GPUs, you could instantiate the OpenGL context on the Intel GPU, and the CUDA context/activity on the NVIDIA GPU. This assumes you don’t intend to do any CUDA/OpenGL interop.
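On the Python side, one way to enforce that split is to restrict which devices CUDA can see before any CUDA library initializes. A minimal sketch, assuming the GTX 1070 enumerates as device 0 (check the index with `nvidia-smi`):

```python
# Sketch: pin the CUDA work to the NVIDIA GPU by restricting device
# visibility. This must happen before importing numba.cuda or pycuda,
# because the CUDA driver enumerates devices at initialization time.
import os

# Device index 0 is an assumption -- verify which index the GTX 1070
# has on your system with `nvidia-smi`.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# From here on, CUDA sees only the selected GPU, e.g.:
# from numba import cuda
# cuda.select_device(0)  # index 0 within the restricted visible set
```

The OpenGL/VR side is handled separately: on Windows you would pick the Intel adapter through the graphics driver settings or the OS per-application GPU preference, not through this environment variable.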

Thanks for the reply.

How about nvidia-cuda-mps-control?

It’s a great piece of software; you can read about it by googling “CUDA MPS” and reading the PDF manual. You’ll also find various descriptions I’ve given of it on this forum with a bit of searching.
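For reference, MPS shares one GPU among multiple cooperating CUDA processes; note that it is available on Linux, not Windows, so it may not apply to a Visual Studio setup. A minimal sketch of starting and stopping the MPS control daemon, assuming the MPS binaries are on the PATH:

```shell
# Sketch: start the CUDA Multi-Process Service (Linux only).
export CUDA_VISIBLE_DEVICES=0      # restrict MPS to the NVIDIA GPU
nvidia-cuda-mps-control -d         # launch the MPS control daemon

# ... CUDA client processes launched now share one MPS server context ...

echo quit | nvidia-cuda-mps-control   # shut the daemon down
```

The daemon spawns an MPS server on first client connection; details such as pipe directories and per-user modes are covered in the MPS manual.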