nvidia-smi EXCLUSIVE_PROCESS

While looking through Robert Crovella’s answer about CUDA’s MPS on Stack Overflow: https://stackoverflow.com/questions/34709749/how-do-i-use-nvidia-multi-process-service-mps-to-run-multiple-non-mpi-cuda-app

I came across this line of code:

nvidia-smi -i 2 -c EXCLUSIVE_PROCESS

So I looked it up in the nvidia-smi documentation: http://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf
but it doesn’t give much helpful detail on what the “EXCLUSIVE_PROCESS” flag does.

Here is a relevant snippet from the doc:

Compute Mode
The compute mode flag indicates whether individual or multiple compute
applications may run on the GPU.
“Default” means multiple contexts are allowed per device.
“Exclusive Process” means only one context is allowed per device,
usable from multiple threads at a time.
“Prohibited” means no contexts are allowed per device (no compute
apps).
“EXCLUSIVE_PROCESS” was added in CUDA 4.0. Prior CUDA releases supported
only one exclusive mode, which is equivalent to “EXCLUSIVE_THREAD” in
CUDA 4.0 and beyond for all CUDA-capable products.
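
For reference, the current setting can also be checked and reset directly with nvidia-smi (a quick sketch; querying works as a normal user, changing the mode requires root):

nvidia-smi --query-gpu=index,compute_mode --format=csv   # list the compute mode of every GPU
nvidia-smi -i 2 -q -d COMPUTE                            # verbose compute-mode report for GPU 2
nvidia-smi -i 2 -c DEFAULT                               # switch GPU 2 back to the default mode (as root)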

My questions are:

  1. So is “EXCLUSIVE_PROCESS” the same as “Exclusive Process”?
  2. Why is it necessary to turn it on before using MPS?

tera’s reply:

  1. Is “EXCLUSIVE_PROCESS” the same as “Exclusive Process”? - Yes.
  2. Why is it necessary to turn it on before using MPS? It is not strictly necessary, but it is a good idea: it ensures all your CUDA processes access the GPU through MPS, rather than some of them accidentally connecting to the GPU directly, or multiple MPS servers being started by mistake.
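
To put the second answer in context, a typical single-GPU MPS bring-up looks roughly like the sequence below (a sketch only, along the lines of the Stack Overflow answer linked above; run as root, with GPU 2 as in the example):

export CUDA_VISIBLE_DEVICES=2           # expose only GPU 2 to the MPS server (note: CUDA device order can differ from nvidia-smi order)
nvidia-smi -i 2 -c EXCLUSIVE_PROCESS    # one context per device: the MPS server will own it
nvidia-cuda-mps-control -d              # start the MPS control daemon
# ...run the CUDA applications as usual; they reach the GPU through MPS...
echo quit | nvidia-cuda-mps-control     # shut the MPS daemon down
nvidia-smi -i 2 -c DEFAULT              # restore the default compute mode

With the GPU in EXCLUSIVE_PROCESS mode only one context is allowed, and the MPS server holds it, so a process that accidentally bypasses MPS fails to create its own context instead of silently running on the GPU directly.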

Thanks tera!

A description is also given in the MPS documentation.

Thanks!