My Vulkan application runs on a multi-GPU, multi-user machine. Because the machine is shared, I want to confine all of my resource usage to a single GPU (say GPU 7). With CUDA I can achieve this through CUDA_VISIBLE_DEVICES. With Vulkan, however, any initialization work requires a VkInstance, and creating that VkInstance alone makes a small allocation on GPU 0 (visible in nvidia-smi), regardless of which VkDevices I plan to create afterwards.
This isn’t an issue with EGL: creating an EGL context only uses memory on the target GPU, so I don’t think this is an inherent driver limitation. This one issue is currently blocking me from upgrading a larger system’s renderer from a legacy OpenGL solution to a much faster Vulkan renderer.
I’d be happy with something as simple as a VULKAN_VISIBLE_DEVICES-style environment variable, or some way to migrate the VkInstance memory to the target device after the VkDevice is created.