I develop a Linux GLX/OpenGL interposer called VirtualGL, which can be used to provide GPU acceleration for OpenGL applications running in Linux remote display environments such as VNC or NX. Normally and historically, VirtualGL intercepts GLX commands from the OpenGL application, redirects them to the GPU-attached X server in the host (which we call the “3D X server”), and modifies the GLX commands such that OpenGL rendering intended for an X window is redirected into a Pbuffer on the 3D X server. When certain synchronization commands are called by the application (glXSwapBuffers() if rendering to the back buffer; or glFinish(), glXWaitGL(), and sometimes glFlush() if rendering to the front buffer), VirtualGL reads back the OpenGL-rendered pixels and transports them to the appropriate place in the application’s X window, thus eliminating the need for the remote display environment to inherently support OpenGL (even if the remote display environment does inherently support OpenGL, the built-in implementation is unaccelerated.)
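To illustrate the basic interposition technique (greatly simplified; this is not VirtualGL's actual code, and read_back_and_transport() is a hypothetical placeholder):

```c
/* Greatly simplified illustration of GLX interposition via LD_PRELOAD;
   not VirtualGL's actual code. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <GL/glx.h>

void read_back_and_transport(Display *dpy, GLXDrawable drawable);  /* hypothetical */

void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
{
    static void (*realGLXSwapBuffers)(Display *, GLXDrawable) = NULL;

    if (!realGLXSwapBuffers)
        realGLXSwapBuffers = (void (*)(Display *, GLXDrawable))
            dlsym(RTLD_NEXT, "glXSwapBuffers");

    /* Read back the OpenGL-rendered pixels and send them to the
       application's X window in the remote display environment. */
    read_back_and_transport(dpy, drawable);

    /* In the real interposer, the swap is performed on the redirected
       off-screen buffer rather than the application's window. */
    realGLXSwapBuffers(dpy, drawable);
}
```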
I am working on an EGL device back end for VirtualGL, which translates GLX commands into EGL commands (using the EGL_EXT_platform_device extension) and thus eliminates the need for a 3D X server. The problem is that EGL doesn’t support multi-view/multi-buffer Pbuffers, so it is necessary for me to use FBOs with either RBOs or render textures in order to emulate the functionality of a GLX/OpenGL window (specifically, double buffering/multi-buffering) using an EGL Pbuffer surface. The catch is that, in order to fully emulate the functionality of a GLX/OpenGL window, the RBO or texture has to be shareable among multiple OpenGL contexts. I’ve been googling like a demon for two days, and I’m getting conflicting information regarding whether that is possible. Sharing the FBO container definitely does not appear to be possible, but I’ve read in several places that it should be possible to share the RBO or render texture. However, I can’t make that work. Unfortunately, sharing the context at the EGL level isn’t possible: since VGL is an interposer, sharing of OpenGL contexts is controlled by the OpenGL application. So what I’m looking for is a way to share RBOs or textures among EGL contexts, even if the EGL context was not created as a shared context. Without that ability, the contents of the OpenGL window would-- from the OpenGL application’s point of view-- be specific to a particular context, and that is not conformant with the GLX spec.
Would immutable textures do this for me? That’s the only thing I think I haven’t tried yet, but I am open to any ideas at this point.
I haven’t tested this, but I think you could probably make it work by creating an EGLImage from a texture and importing that to all the other relevant contexts. If that doesn’t work, you should be able to do the equivalent by creating a memory object in Vulkan and using that as the backing store for a GL texture in each context. If you run into issues with this, let me know.
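Something along these lines (completely untested; assumes EGL 1.5’s eglCreateImage(), with the extension entry points already resolved via eglGetProcAddress()):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdint.h>

/* Untested sketch: share a texture between two contexts via an EGLImage. */
GLuint shareTexture(EGLDisplay dpy, EGLSurface pb, EGLContext ctxA,
                    EGLContext ctxB, GLuint texA)
{
    /* With the context that owns the texture current, wrap the texture
       in an EGLImage.  eglCreateImage() is core in EGL 1.5. */
    eglMakeCurrent(dpy, pb, pb, ctxA);
    EGLImage image = eglCreateImage(dpy, ctxA, EGL_GL_TEXTURE_2D,
                                    (EGLClientBuffer)(uintptr_t)texA, NULL);

    /* In any other context, create a texture that aliases the same
       memory.  glEGLImageTargetTexture2DOES() is from GL_OES_EGL_image
       (OpenGL ES); desktop GL may need glEGLImageTargetTexStorageEXT()
       from GL_EXT_EGL_image_storage instead. */
    eglMakeCurrent(dpy, pb, pb, ctxB);
    GLuint texB;
    glGenTextures(1, &texB);
    glBindTexture(GL_TEXTURE_2D, texB);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
    return texB;
}
```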
VirtualGL has to reroute OpenGL rendering that would normally be intended for a window into an off-screen buffer. In this case, because multi-view/multi-buffer Pbuffers aren’t a thing in EGL, that means rerouting rendering into an RBO or texture bound to an EGL Pbuffer surface (the RBO or texture is the size of the OpenGL window, the Pbuffer is just a dummy 1x1 surface.) However, an OpenGL application is well within its rights to use multiple contexts to render into the same window. That is the functionality I am trying unsuccessfully to duplicate. I don’t just need to share the rendered image. I need to share the rendering target (the actual buffer or texture) so that multiple contexts can potentially be used by the application to render into it. I don’t think I can get there with EGLImages, because those are also context-specific. Also, VirtualGL has to support multisampling, but EGLImages don’t. It’s also unclear how exactly EGLImages are implemented, but I strongly suspect that they would introduce undesirable overhead, such as pixel copying, behind the scenes. I am asserting that EGLImages are context-specific based on the man page for eglCreateImage(), which indicates that it is an error to call that function with target set to EGL_GL_TEXTURE_2D or EGL_GL_RENDERBUFFER when context is set to EGL_NO_CONTEXT.
The basic problem is that I’m trying to emulate, using OpenGL, functionality that GLX provides at a higher level than OpenGL. With GLX, multi-buffering attributes are part of the GLXFBConfig or X visual, so those attributes become part of the GLXDrawable. GLXDrawables are not tied to any specific context.
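To make that concrete, here is roughly what each emulated drawable involves (simplified sketch with hypothetical names; the real thing also has to handle depth/stencil attachments, multisampling, etc.):

```c
#include <GL/gl.h>
#include <GL/glext.h>

enum { FRONT = 0, BACK = 1 };

/* Per-drawable state: two RBOs standing in for the window's front and
   back buffers, plus an FBO to bind them in the current context.  The
   EGL Pbuffer surface itself is just a dummy 1x1 surface. */
GLuint rbo[2], fbo;

void createEmulatedWindowBuffers(int width, int height)
{
    glGenRenderbuffers(2, rbo);
    for (int i = 0; i < 2; i++) {
        glBindRenderbuffer(GL_RENDERBUFFER, rbo[i]);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    }
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    /* Rendering nominally goes to the emulated back buffer. */
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, rbo[BACK]);
    /* The problem: rbo[] is visible only to contexts that share with
       the one that created it, and context sharing is controlled by the
       application, not by VGL. */
}
```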
EGLImages aren’t context-specific. They can be created from a specific context’s texture, but the image handle itself is portable once created. EGLImages don’t introduce copies. They’re a container for shared memory. To support multisampling, you’d have to use private renderbuffers for the multisample portion of each image and resolve it down to a shared single-sample buffer at flush or swapbuffers time. I suspect other GLX drivers do this anyway when sharing a window between multiple contexts. Ours doesn’t though.
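At flush/swap time, the resolve would be something like this (sketch; msaaFBO and sharedFBO are hypothetical names):

```c
/* Resolve the context-private multisample buffer into the shared
   single-sample buffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);    /* private multisample FBO */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, sharedFBO);  /* FBO wrapping the shared buffer */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```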
If EGLImage doesn’t work, there’s always the Vulkan interop path I mentioned.
Finally got a chance to look into it. While I can easily get the pixels from a renderbuffer or texture into an EGLImage, I can’t figure out how to get them back out of the EGLImage into a renderbuffer/texture in a different context. It seems that there are a couple of OpenGL ES extensions that would enable that, but nothing for regular OpenGL.
Barring the ability to use EGLImages for this, I would also appreciate an example of how to create a texture or renderbuffer from Vulkan-allocated memory. I haven’t been able to find one.
I did stumble upon https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_EGL_image_storage.txt, which would seemingly allow me to create a texture from an EGLImage in desktop OpenGL. However, if the OpenGL 4.2 requirement of that extension means that it requires an OpenGL 4.2+ profile, then that’s a problem. Emulating an OpenGL window using FBOs means that VirtualGL has to create and maintain those FBOs within the OpenGL contexts that the 3D application creates, and there is no guarantee that the 3D application is going to create a context compatible with OpenGL 4.2. I basically can’t impose any requirement beyond OpenGL 3.0 behind the scenes, because unfortunately, there are still commercial applications out there that use the OpenGL 2.x features that were removed in OpenGL 3.1. The concept of this EGL back end for VGL was to act as a bridge between legacy GLX/OpenGL applications and the newer EGL API, which gives us the ability to access the GPU through a device file (and potentially other methods, such as a Wayland compositor, at some point in the future.) But that means that any newer OpenGL functionality requirements have to be hidden from the application. If I could eventually make the EGL back end as compatible as the GLX back end, then it would be nice to remove the latter, but that’s a huge “if” at the moment. EGL just doesn’t do as much yet.
Furthermore, the more I dig into the EGLImage approach, the more issues I uncover. Multisampling is still a problem, because-- bearing in mind that VGL is essentially a GLX implementation-- it has to allow for GLX contexts and GLX drawables to be independent. A 3D application is well within its rights to use multiple GLX contexts with the same drawable, or to use multiple GLX drawables with the same context, without ever swapping the buffers. So, to the best of my understanding, from VGL’s point of view, the multi-sample render target has to be part of the GLX drawable object, and that render target has to survive as part of that object-- in multi-sampled form-- even if the 3D application changes the context. I can’t just render the multi-sample buffer down into a single-sample buffer when the context changes, because I don’t know whether the 3D application will want to do more rendering to the multi-sample buffer later on. Thus, I can’t see how to make EGLImages do what I need them to do, since they don’t support GL_TEXTURE_2D_MULTISAMPLE.
I am open to using Vulkan, but I still can’t figure out how to do what you propose, and online documentation seems somewhat scant on that topic.
BTW, this would all be academic if an EGL multiview Pbuffer extension existed. Ultimately it just seems like trying to emulate the properties of a non-context-specific drawable with context-specific textures or RBOs is asking for trouble. Even if I could get past this immediate roadblock, making all of this mess pass conformance is still going to be a nightmare and a half. I can’t afford to go down any more rabbit holes at this point. I need to get this feature to proof-of-concept stage so I can get paid for my months of work on it thus far.
I don’t know what other drivers do, but all the OpenGL 2.x contexts will actually be 4.x contexts with the compatibility profile on our side, so you won’t have problems there.
I’d mentioned the multisampling issues above. I don’t know if any real apps will actually run into the issues you mention, but it will be different behavior than our GLX driver provides. I don’t know whether other GLX drivers adhere this closely to the spec.
The only documentation for the Vulkan features I’m aware of is the spec. I don’t have any sample code handy. This is certainly the most robust solution, though. It doesn’t actually require any Vulkan programming per se, just some Vulkan boilerplate to allocate the memory properly and export it. Granted, “Vulkan boilerplate” is probably a few hundred lines of code because… it’s Vulkan.
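But the key calls would look roughly like this (untested sketch: assumes VK_KHR_external_memory_fd on the Vulkan side and GL_EXT_memory_object / GL_EXT_memory_object_fd on the GL side; instance/device creation, error handling, and GL extension function loading are omitted):

```c
#include <vulkan/vulkan.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <unistd.h>

/* --- Vulkan side: make an exportable allocation sized for the image. --- */
int exportImageMemory(VkDevice device, uint32_t width, uint32_t height,
                      uint32_t memoryTypeIndex, VkDeviceSize *size)
{
    VkExternalMemoryImageCreateInfo extInfo = {
        .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT
    };
    VkImageCreateInfo imageInfo = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .pNext = &extInfo,
        .imageType = VK_IMAGE_TYPE_2D,
        .format = VK_FORMAT_R8G8B8A8_UNORM,
        .extent = { width, height, 1 },
        .mipLevels = 1, .arrayLayers = 1,
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .tiling = VK_IMAGE_TILING_OPTIMAL,
        .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT
    };
    VkImage image;
    vkCreateImage(device, &imageInfo, NULL, &image);

    VkMemoryRequirements reqs;
    vkGetImageMemoryRequirements(device, image, &reqs);
    *size = reqs.size;

    /* NOTE: some implementations require a dedicated allocation here
       (VkMemoryDedicatedAllocateInfo chained alongside the export info). */
    VkExportMemoryAllocateInfo exportInfo = {
        .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT
    };
    VkMemoryAllocateInfo allocInfo = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .pNext = &exportInfo,
        .allocationSize = reqs.size,
        /* memoryTypeIndex must be chosen from reqs.memoryTypeBits */
        .memoryTypeIndex = memoryTypeIndex
    };
    VkDeviceMemory memory;
    vkAllocateMemory(device, &allocInfo, NULL, &memory);

    VkMemoryGetFdInfoKHR getFdInfo = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
        .memory = memory,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT
    };
    PFN_vkGetMemoryFdKHR getMemoryFd = (PFN_vkGetMemoryFdKHR)
        vkGetDeviceProcAddr(device, "vkGetMemoryFdKHR");
    int fd = -1;
    getMemoryFd(device, &getFdInfo, &fd);
    return fd;
}

/* --- GL side, repeated in each of the application's contexts. --- */
GLuint importTexture(int fd, VkDeviceSize size, GLsizei width, GLsizei height)
{
    GLuint memObj, tex;
    glCreateMemoryObjectsEXT(1, &memObj);
    /* glImportMemoryFdEXT() takes ownership of the fd, so dup() it when
       importing the same allocation into more than one context. */
    glImportMemoryFdEXT(memObj, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, dup(fd));

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT);
    glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, width, height, memObj, 0);
    /* GL_EXT_memory_object also provides glTexStorageMem2DMultisampleEXT(),
       which may be relevant to the multisampling concerns above. */
    return tex;
}
```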
Multiple contexts rendering to the same drawable might be a bigger problem than just making an EGLSurface handle that’s analogous to a window.
In GLX, you can have multiple GLXContexts on multiple threads bound to the same GLXDrawable at the same time. You could also have yet another thread without a current drawable (or with a different current drawable) calling glXSwapBuffers.
You could even have contexts from different clients in different processes, though I’m guessing that’s outside the scope of what VirtualGL would implement.
In EGL, each EGLSurface can be current to at most one thread at a time, and only that thread is allowed to call eglSwapBuffers. You can still use the same EGLSurface with multiple EGLContexts, but you can’t use them at the same time.
The limitation of eglSwapBuffers() is irrelevant if we’re forced to emulate double buffering, because VirtualGL will emulate glXSwapBuffers() by simply swapping pointers to the front and back buffer objects (EGLImages, FBOs, Vulkan buffers, or whatever I can make work.) It won’t ever call eglSwapBuffers(). My proposal to nVidia was to create a new extension (EGL_EXT_multiview_pbuffer) that basically operates like EGL_EXT_multiview_window but for off-screen EGLSurfaces. But does EGL_EXT_multiview_window even use eglSwapBuffers()? If not, then such an extension would still seem like a viable idea, although I understand that it will take some effort to develop.
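In other words, something like this (simplified sketch; the structure and helper names are hypothetical):

```c
typedef struct {
    GLuint fbo, frontBuffer, backBuffer;  /* RBO handles */
    int width, height;
    unsigned char *pixels;
} EmulatedDrawable;

/* Simplified sketch of emulated glXSwapBuffers(). */
void emulatedSwapBuffers(EmulatedDrawable *d)
{
    /* Read back the emulated back buffer and transport it to the
       application's X window before the swap. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, d->fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, d->width, d->height, GL_RGB, GL_UNSIGNED_BYTE,
                 d->pixels);
    transportToXWindow(d);  /* hypothetical helper */

    /* "Swap" by exchanging the front/back buffer handles and re-attaching
       the new back buffer; eglSwapBuffers() is never called. */
    GLuint tmp = d->frontBuffer;
    d->frontBuffer = d->backBuffer;
    d->backBuffer = tmp;
    glFramebufferRenderbuffer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, d->backBuffer);
}
```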
The other idea I had since posting my last comment in this thread was to emulate double buffering with multiple single-buffered Pbuffers. That has some potential thorniness as well, vis-a-vis how to emulate features such as GL_FRONT_AND_BACK, but it seems less thorny than anything else on the table at the moment.
However, the limitation on multiple threads rendering to the same EGLSurface would still apply. The EGL back end feature for VirtualGL is being driven by the visualization supercomputer crowd more so than the corporate crowd, so I get the impression that the type of applications they will need to support with it is much narrower in scope. (Visualization applications tend to be better behaved in their usage of OpenGL than, say, commercial CAD applications.) The corporate crowd is more concerned about compatibility across a wide variety of applications, so there are other reasons why the EGL back end might not be a good fit for them-- because it will be difficult or impossible to emulate certain GLX features with it, at least at first (accumulation and aux buffers and GLX_EXT_import_context are probably impossible, and glXCopyContext() and GLX_EXT_texture_from_pixmap are likely to require significant effort.) The visualization crowd, on the other hand, is more concerned about minimizing system resources (hence their desire to avoid the 3D X server and all of its dependencies.)

The number of applications that actually use multiple threads to render to the same window should be low, possibly even zero, because historically, not all drivers reacted well to that. Whereas VirtualGL fully supports one-thread-per-drawable multi-threaded rendering, the multiple-threads-per-drawable scenario has received little or no testing in the ten years that I’ve been developing VirtualGL independently. I can’t really even guarantee that it works. The complete and utter silence I’ve heard regarding that may mean that it works fine, or it may mean that literally no one has tried it. I suspect the latter.
VirtualGL can handle multiple processes rendering to the same drawable simultaneously, but there again, that’s a poorly-tested path, and I doubt that many applications are doing that these days.
GLX_EXT_import_context and GLX_EXT_texture_from_pixmap are probably both safe to just not support-- the former only works with indirect rendering contexts, and I think the latter is mainly used for compositing window managers.
In any case, if you don’t need to support multiple threads drawing to the surface at the same time, then what sort of semantics do you need? In particular, do you need to support using glDrawBuffer calls with GL_FRONT or GL_FRONT_AND_BACK, or can you assume that the application is well-behaved enough to only draw to the back buffer?
Also, do you need to preserve the buffer contents after the app calls glXSwapBuffers?
Some people do use VirtualGL to run compositing WMs, so I could see there being some demand for GLX_EXT_texture_from_pixmap emulation. However, I consider that to be an advanced topic for the VirtualGL EGL back end, and if someone wants that extension badly enough, they’ll need to pay me to implement it after the rest of the feature is complete.
glDrawBuffer(GL_FRONT) has to work for sure. Some applications use that. glDrawBuffer(GL_FRONT_AND_BACK) is, as indicated above, a potential pitfall. Probably few applications use that, but it’s enough of a core feature that it needs to work (although it doesn’t necessarily need to work with optimal performance.) The main reason it needs to work is so VirtualGL’s own conformance test (fakerut) will pass without modification. glXCopyContext() needs to work for the same reason.
The specification for glXSwapBuffers() indicates that the back buffer becomes undefined after a swap, so I see no particular need to preserve it in VirtualGL’s implementation. Since VGL interposes the glXSwapBuffers() function, it reads back the contents of the back buffer and transports them prior to swapping the buffers.
“Indirect rendering” doesn’t have anything to do with the glDraw*Indirect calls. It refers to when the application doesn’t talk to the hardware directly, and instead sends all of its GL and GLX commands across the wire in X11 requests. It’s what you’d use if the X server was running on a different computer than the application.
Unfortunately, my idea of using single-buffered Pbuffers to emulate double and quad buffering did not pan out, for reasons described in Issue #10 (“Access the GPU without going through an X server”) in the VirtualGL GitHub repository. Basically, it would have proven much nastier than an FBO-based solution (which is itself already nasty.) Given that I’ve spent 130 hours of speculative (uncompensated) labor working on this feature, I just don’t have any resources to continue working on it unless (a) a straightforward method emerges to make the contents of an RBO persist across multiple OpenGL contexts, or (b) a multi-buffer Pbuffer extension emerges for EGL. Barring either of those solutions, I have no confidence that an EGL back end for VirtualGL is even possible, and without that confidence, I am unwilling to spend more uncompensated labor on the feature.
Not that I’m implying you need to work on this, but for posterity I just wanted to again note that the Vulkan interop method suggested several times earlier in this thread solves all the known issues you’ve listed and doesn’t require any new EGL or GL extensions.
OK, so you’re confident that Vulkan can solve the problem, but how am I supposed to develop that same confidence unless I can see some low-level example of that? Without that example, it isn’t a “method.” It’s just an idea.
The best way to become confident would be to read the specs and start writing code rather than continually requesting we do it for you. Then you’d have both example code, and a solution.
I’ve spent hours looking at the Vulkan spec and googling around for solutions. If you’re so confident that it can work, then surely you can at least point me to specific Vulkan API functions that I might be able to leverage.