Sharing render buffers or render textures among multiple OpenGL contexts

https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VK_KHR_external_memory_fd
https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VK_KHR_external_memory
https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VK_KHR_external_memory_capabilities

The above 3 extension appendices should contain links to all the Vulkan extension functionality needed to allocate memory for GL interop from Vulkan. vkAllocateMemory() will ultimately be the function that allocates the memory object.
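Not from the thread, but a minimal untested sketch of what that allocation/export step might look like, assuming a VkDevice (device) created with VK_KHR_external_memory_fd enabled; size, memTypeIndex, and the pfnGetMemoryFdKHR function pointer are placeholders you would fill in yourself:

// Allocate exportable device memory and export it as an opaque fd.
VkExportMemoryAllocateInfoKHR exportInfo = {
    .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR,
    .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR
};
VkMemoryAllocateInfo allocInfo = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
    .pNext = &exportInfo,
    .allocationSize = size,          // from vkGetImageMemoryRequirements()
    .memoryTypeIndex = memTypeIndex  // a device-local type from the same query
};
VkDeviceMemory mem;
vkAllocateMemory(device, &allocInfo, NULL, &mem);

// pfnGetMemoryFdKHR is a placeholder for vkGetMemoryFdKHR(), fetched with
// vkGetDeviceProcAddr().
VkMemoryGetFdInfoKHR fdInfo = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
    .memory = mem,
    .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR
};
int fd = -1;
pfnGetMemoryFdKHR(device, &fdInfo, &fd);  // fd can now be handed to GL (or another process)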

https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects.txt
https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects_fd.txt

The above specs contain the GL APIs to import that memory and create textures from it. The textures can then be bound to FBOs. The memory can be imported to multiple GL contexts, bound to textures in each, and each of those views should remain layout-coherent, meaning content rendered in one will appear in the others as long as the memory backing them is the same. These specs also contain the docs for synchronization of these surfaces using Vulkan primitives, though they can be synchronized from GL to GL as well once Vulkan allocates the requisite primitives and exports them.
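As a rough, untested sketch of that import path (fd, size, width, and height are assumed to come from the Vulkan allocation; the tiling parameter must match the Vulkan image's tiling):

// Import the fd exported by Vulkan into a GL memory object...
GLuint memObj = 0;
glCreateMemoryObjectsEXT(1, &memObj);
glImportMemoryFdEXT(memObj, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);

// ... create a texture whose storage is backed by that memory ...
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT);
glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, width, height, memObj, 0);

// ... and attach it to an FBO so it can be rendered to.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

The same memory can be imported into other contexts or processes in the same way (each import consumes its fd, so dup() or re-export it if you need more than one), and because every such texture aliases the same underlying allocation, rendering done through one is visible through the others, subject to the synchronization described in the specs.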

https://developer.nvidia.com/getting-vulkan-ready-vr

This blog post contains some sample code going the other direction, with Direct3D allocating resources that are then imported into Vulkan using rather preliminary NV-only versions of the above KHR extensions. Nevertheless, the sample code there may be useful for developing an understanding of the general workflow, I suppose.

OK, thank you. I feel somewhat vindicated, given that extensions to both Vulkan and OpenGL are required in order to make this work. I’m not sure how I was supposed to know that or to know which extensions to look for. Now that I have more specific targets for my googling, I was able to find an example.

Next question:

Bearing in mind that I’m using EGL_EXT_platform_device, each GPU in the system is uniquely identified by its DRI device path (e.g. /dev/dri/card0), and users specify which GPU to use by setting VGL_DISPLAY to that device path. So how do I ensure that Vulkan is using the same device?

Apologies for not making the required extensions clear enough. Device correlation was one of the big challenges when developing them. Here’s some untested pseudo-code to accomplish that, which should just about compile:

Assuming you already have a GL or GLES context current (if not, you’ll have to create a dummy one for now):

// numDevices should be 1 unless using Xinerama or SLI.  If >1, things get really complex.
GLint numDevices = 0;
glGetIntegerv(GL_NUM_DEVICE_UUIDS_EXT, &numDevices);

GLubyte glDevUUID[GL_UUID_SIZE_EXT];
glGetUnsignedBytei_vEXT(GL_DEVICE_UUID_EXT, 0 /* first device */, &glDevUUID[0]);

// For each Vulkan physical device, compare its device UUID against the GL device UUID.
// (instance is your VkInstance; in real code, fetch vkGetPhysicalDeviceProperties2KHR()
// with vkGetInstanceProcAddr().)
uint32_t physDevCount = 0;
vkEnumeratePhysicalDevices(instance, &physDevCount, NULL);
VkPhysicalDevice physDevices[physDevCount];
vkEnumeratePhysicalDevices(instance, &physDevCount, physDevices);

VkPhysicalDevice myVkDev = VK_NULL_HANDLE;
for (uint32_t i = 0; i < physDevCount; i++) {
    VkPhysicalDeviceIDPropertiesKHR idProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR, 0 };
    VkPhysicalDeviceProperties2KHR devProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR, &idProps };
    vkGetPhysicalDeviceProperties2KHR(physDevices[i], &devProps);

    // deviceUUID lives in the VkPhysicalDeviceIDPropertiesKHR struct chained above
    if (!memcmp(&idProps.deviceUUID[0], &glDevUUID[0], sizeof(glDevUUID))) {
        myVkDev = physDevices[i];
        // Init Vulkan and create allocations on this device
        break;
    }
}

To be fully correct, you should also compare driverUUID and GL_DRIVER_UUID_EXT between the Vulkan device and the GL context using the same workflow. Sharing allocations between incompatible driver versions results in undefined behavior.
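Something like the following, reusing idProps from the matched device in the loop above (again untested):

// Query the GL driver UUID (a single, non-indexed value) and compare it with
// the Vulkan driver UUID from VkPhysicalDeviceIDPropertiesKHR.
GLubyte glDriverUUID[GL_UUID_SIZE_EXT];
glGetUnsignedBytevEXT(GL_DRIVER_UUID_EXT, &glDriverUUID[0]);

if (memcmp(&idProps.driverUUID[0], &glDriverUUID[0], sizeof(glDriverUUID)) != 0) {
    // Incompatible drivers: don't attempt to share allocations between them.
}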

If creating a GL or GLES context is onerous for some reason, in theory we could add an EGL_DEVICE_UUID property to the EGLDevice itself. It would be a bit complicated because I believe one EGLDevice is defined to represent an entire SLI group, similar to a GL context, but it could return an array of UUIDs just like GL does.

Thanks. Creating a temp context isn’t an issue, so that should work.

Was anyone able to get any example code or snippets for this up and running? I’m still banging my head against the wall on this one. I’m mostly just interested in being able to share textures across process boundaries, and it seems truly insane that this is the only non-dead-end approach I’ve found so far. I’m just having trouble making any serious inroads on it.

I finally put out enough fires to be able to work on this some more, but unfortunately, I’m stuck trying to make Vulkan work properly. Even though vulkaninfo works properly, in my code running on the same machine, vkEnumeratePhysicalDevices() returns VK_ERROR_INITIALIZATION_FAILED. I’ve copied the literal initialization code from vulkaninfo into my class within VirtualGL, and the same code that works in vulkaninfo fails in VirtualGL. No clue how to proceed. I’m wondering if maybe Vulkan has to be initialized prior to EGL or something silly like that.

Further research suggests that Vulkan is trying to establish an X connection within the body of vkCreateInstance(), and it’s doing so using the DISPLAY environment variable. Obviously that won’t work with VirtualGL for two reasons:

  1. In VirtualGL, the DISPLAY environment variable points to the 2D X server, and VirtualGL’s entire raison d’être is to prevent 3D rendering from occurring on the 2D X server.
  2. The entire raison d’être of the EGL back end is to eliminate the need for a GPU-attached X server, but Vulkan seems to require one.

Unless there is a way around that, Vulkan is a non-starter.

For those who may happen to stumble upon this thread, I ultimately solved the problem using RBOs and shared contexts, albeit with quite a bit of complexity. The key piece of information I didn’t clue into at first is that OpenGL context sharing is transitive. Thus, I was able to create a dedicated “RBO context” on which to hang all of the RBOs and share it with any unshared contexts that the 3D application requests to create. Since the first shared context that the 3D application requests has to be shared with an unshared context, ultimately all contexts end up shared with the RBO context. The RBO context is temporarily made current when creating a Pbuffer (either explicitly, in the body of glXCreatePbuffer(), or implicitly, when VirtualGL creates a Pbuffer to emulate an OpenGL window).

The complexity mostly has to do with managing both a “real” and a “fake” set of FBO and draw/read buffer bindings, so as to make the 3D application believe that it’s rendering into the default framebuffer rather than an FBO, as well as managing a fake set of GLXFBConfigs that expose multi-buffering capabilities to the application even though EGL Pbuffers technically don’t have those capabilities. Essentially, I ended up developing most of a full GLX implementation on top of EGL, but it works, at least as far as the VirtualGL unit tests are concerned. As far as I can tell, this approach should work, at least within the scope of functionality that VirtualGL supports, since shared OpenGL contexts only share their objects, and those objects are all given a globally unique ID.

The code is available in the VirtualGL dev branch on GitHub if anyone wants to look at it. Pre-release builds are also available for testing at VirtualGL | DeveloperInfo / Pre-Release Builds/Continuous Integration (3.0/evolving).
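To make the core idea concrete, here is a hypothetical sketch (not the actual VirtualGL code) of the kind of context-creation interposition described above; the function and variable names are made up for illustration:

// Hypothetical sketch: every context the application asks for ends up sharing,
// directly or transitively, with one hidden "RBO context", so RBOs hung off
// that context are visible from all application contexts.

static EGLContext rboContext = EGL_NO_CONTEXT;

EGLContext createInterposedContext(EGLDisplay edpy, EGLConfig config,
                                   EGLContext appShareCtx)
{
    static const EGLint ctxAttribs[] = { EGL_NONE };

    eglBindAPI(EGL_OPENGL_API);

    // Create the hidden RBO context the first time through.
    if (rboContext == EGL_NO_CONTEXT)
        rboContext = eglCreateContext(edpy, config, EGL_NO_CONTEXT, ctxAttribs);

    // If the application asked for an unshared context, share it with the RBO
    // context instead.  If it asked to share with an existing context, that
    // context already shares (transitively) with the RBO context, so honor the
    // application's request as-is.
    EGLContext share = (appShareCtx == EGL_NO_CONTEXT) ? rboContext : appShareCtx;
    return eglCreateContext(edpy, config, share, ctxAttribs);
}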