CUDA and CPU threads

We have 4 GPUs on a node, and I intend to use them to run the same executable with the same input file, inputFile, and a parameter file, parmFile (but with different parameters
for each CPU thread <=> GPU pairing).

Since only one CPU thread can use a GPU at a time, can I be guaranteed that, if I do the following on the node,

./exefile inputFile
./exefile inputFile
./exefile inputFile

these will run on three different CPUs?

The parameter file contains the device ID (which is passed to cudaSetDevice within the code) and an input parameter, p, which is changed for each run.
Note that the parameter file is rewritten before each executable is invoked.

Hence, for example, one ./exefile inputFile could run on the GPU with device ID 0 for a certain input, p1,
while the next ./exefile inputFile runs on the GPU with device ID 1 for a different input, p2.

Check out nvidia-smi and exclusive mode; that should help.
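For example, the compute mode can be set with nvidia-smi (the flag and mode name below are from current nvidia-smi, require root, and may be spelled differently on older driver versions):

```shell
# Put the GPUs into exclusive mode: at most one compute process
# per GPU, so a second context on the same device will fail to create.
nvidia-smi -c EXCLUSIVE_PROCESS

# Inspect the current compute mode of each GPU.
nvidia-smi -q -d COMPUTE
```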

The one-context-per-CPU-thread restriction is a per-process restriction, not a system-wide one, so you could in theory oversubscribe the GPUs with more than one context each. However, that's probably not what you want, and exclusive mode should let you do what you want.