Sharing render buffers or render textures among multiple OpenGL contexts

The above three extension appendices should contain links to all of the Vulkan extension functionality needed to allocate memory for GL interop from Vulkan. vkAllocateMemory() will ultimately be the function that allocates the memory object.

The above specs contain the GL APIs to import that memory and create textures from it. The textures can then be bound to FBOs. The memory can be imported to multiple GL contexts, bound to textures in each, and each of those views should remain layout-coherent, meaning content rendered in one will appear in the others as long as the memory backing them is the same. These specs also contain the docs for synchronization of these surfaces using Vulkan primitives, though they can be synchronized from GL to GL as well once Vulkan allocates the requisite primitives and exports them.

This blog post contains some sample code going the other direction, with Direct3D allocating resources that are then imported into Vulkan using rather preliminary NV-only versions of the above KHR extensions. Nevertheless, the sample code there may be useful in developing an understanding of the general workflow, I suppose.

OK, thank you. I feel somewhat vindicated, given that extensions to both Vulkan and OpenGL are required in order to make this work. I’m not sure how I was supposed to know that or to know which extensions to look for. Now that I have more specific targets for my googling, I was able to find an example.

Next question:

Bearing in mind that I’m using EGL_EXT_platform_device, each GPU in the system is uniquely identified by its DRI device path (e.g. /dev/dri/card0), and users specify which GPU to use by setting VGL_DISPLAY to that device path. So how do I ensure that Vulkan is using the same device?

Apologies for not making the required extensions clear enough. Device correlation was one of the big challenges when developing them. Here’s some untested pseudo-code to accomplish that, which should just about compile:

Assuming you already have a GL or GLES context current (for now, you’ll have to create a dummy one if you don’t):

// numDevices should be 1 unless using Xinerama or SLI.  If >1, things get really complex.
GLint numDevices = 0;
glGetIntegerv(GL_NUM_DEVICE_UUIDS_EXT, &numDevices);

GLubyte glDevUUID[GL_UUID_SIZE_EXT];
glGetUnsignedBytei_vEXT(GL_DEVICE_UUID_EXT, 0 /* first device */, &glDevUUID[0]);

// Assuming physDevices[0..physDeviceCount-1] came from vkEnumeratePhysicalDevices()
for (uint32_t i = 0; i < physDeviceCount; i++) {
    VkPhysicalDeviceIDPropertiesKHR idProps =
        { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR };
    VkPhysicalDeviceProperties2KHR devProps =
        { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR, &idProps };
    vkGetPhysicalDeviceProperties2KHR(physDevices[i], &devProps);

    // Note: the device UUID lives in idProps, not devProps
    if (!memcmp(&idProps.deviceUUID[0], &glDevUUID[0], sizeof(glDevUUID))) {
        myVkDev = physDevices[i];
        // Init Vulkan and create allocations on this device
        break;
    }
}

To be fully correct, you should also compare the driverUUID and GL_DRIVER_UUID_EXT between the Vulkan device and GL context as well using the same workflow. Sharing allocations between incompatible driver versions results in undefined behavior.
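In case it helps, the combined check described above can be reduced to a small helper (the function name is mine; VK_UUID_SIZE and GL_UUID_SIZE_EXT are both 16 bytes):

```c
#include <stdbool.h>
#include <string.h>

#define UUID_SIZE 16  /* == VK_UUID_SIZE == GL_UUID_SIZE_EXT */

/* Sharing is only well-defined when BOTH the device UUID and the driver UUID
 * match: compare GL_DEVICE_UUID_EXT/GL_DRIVER_UUID_EXT (queried with
 * glGetUnsignedBytei_vEXT()) against VkPhysicalDeviceIDPropertiesKHR's
 * deviceUUID/driverUUID. */
static bool uuidsCompatible(const unsigned char glDevUUID[UUID_SIZE],
                            const unsigned char glDrvUUID[UUID_SIZE],
                            const unsigned char vkDevUUID[UUID_SIZE],
                            const unsigned char vkDrvUUID[UUID_SIZE])
{
  return memcmp(glDevUUID, vkDevUUID, UUID_SIZE) == 0 &&
         memcmp(glDrvUUID, vkDrvUUID, UUID_SIZE) == 0;
}
```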

If creating a GL or GLES context is onerous for some reason, in theory we could add an EGL_DEVICE_UUID property to the EGLDevice itself. It would be a bit complicated because I believe one EGLDevice is defined to represent an entire SLI group, similar to a GL context, but it could return an array of UUIDs just like GL does.

Thanks. Creating a temp context isn’t an issue, so that should work.

Was anyone able to get any example code or snippets for this up and running? I’m still banging my head against the wall on this one. I’m mostly just interested in being able to share textures across process boundaries, and it seems truly insane that this is the only non-dead-end approach I’ve found so far. I’m just having trouble making any serious inroads into it.

I finally put out enough fires to be able to work on this some more, but unfortunately, I’m stuck trying to make Vulkan work properly. Even though vulkaninfo works properly, in my code running on the same machine, vkEnumeratePhysicalDevices() returns VK_ERROR_INITIALIZATION_FAILED. I’ve copied the literal initialization code from vulkaninfo into my class within VirtualGL, and the same code that works in vulkaninfo fails in VirtualGL. No clue how to proceed. I’m wondering if maybe Vulkan has to be initialized prior to EGL or something silly like that.

Further research suggests that Vulkan is trying to establish an X connection within the body of vkCreateInstance(), and it’s doing so using the DISPLAY environment variable. Obviously that won’t work with VirtualGL for two reasons:

  1. In VirtualGL, the DISPLAY environment variable points to the 2D X server, and VirtualGL’s entire raison d'être is to prevent 3D rendering from occurring on the 2D X server.
  2. The entire raison d'être of the EGL back end is to eliminate the need for a GPU-attached X server, but Vulkan seems to require one.

Unless there is a way around that, Vulkan is a non-starter.