How to allocate GPU devices for processes dynamically

Suppose I have a node with N GPU devices and I want to run two applications, APP1 and APP2, on it. APP1 will use m devices and APP2 will use n devices, where N > m+n.
When these two applications were built, the developer didn't know the exact number of devices that would be allocated to each of them.
So how can I make these two applications run on the same node while using different GPU devices? Can I allocate GPU devices to a process dynamically when it is launched?

Use the CUDA_VISIBLE_DEVICES environment variable. Set it to a different list of device indices for each process when you launch it, and each process will only be able to see and use the devices you list.

See: Programming Guide :: CUDA Toolkit Documentation
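
As a minimal sketch, assuming APP1 should get the first two physical devices and APP2 the next three (adjust the index lists to your m and n), you would launch them like CUDA_VISIBLE_DEVICES=0,1 ./app1 and CUDA_VISIBLE_DEVICES=2,3,4 ./app2. Inside each process the CUDA runtime enumerates only the listed devices, renumbered starting from 0, so the application does not need to know the allocation in advance; it can simply query how many devices it sees and use them all. Something like this inside the application:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // Only the devices listed in CUDA_VISIBLE_DEVICES are visible here,
    // and they are renumbered 0..count-1 within this process.
    printf("This process sees %d GPU device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  device %d: %s\n", i, prop.name);
    }
    return 0;
}

The application and device indices above are just example assumptions; the only requirement is that the two index lists you pass to the two processes do not overlap.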