While I never discovered a general way to run an arbitrary OpenGL binary on a specific GPU in a headless (i.e. no X server) environment, the way we can with CUDA applications, OpenGL applications can leverage an SDL fork for this.
We now need to specify which GPU a Vulkan application uses at runtime, and CUDA_VISIBLE_DEVICES doesn't do this: the applications all still run on GPU 0. Is there a similar environment variable that does for Vulkan applications what CUDA_VISIBLE_DEVICES does for CUDA applications? Using nvidia-docker to restrict GPU visibility isn't always an option, since many shared systems lack Docker; the same goes for SLURM. I gave the Mesa approach a go, but that doesn't work either (not surprising, since Mesa is a different Vulkan implementation than the one that ships with the NVIDIA proprietary driver).