EGL Eye: OpenGL Visualization without an X Server

Originally published at: https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/

If you’re like me, you have a GPU-accelerated in-situ visualization toolkit that you need to run on the latest-generation supercomputer. Or maybe you have a fantastic OpenGL application that you want to deploy on a server farm for offline rendering. Even though you have access to all that amazing GPU power, you’re often out of…

Hi Peter

If my understanding is correct, this should work on an EC2 Ubuntu server that doesn't have any monitor attached. However, when I run your code I get the error below:

libEGL warning: DRI2: xcb_connect failed
libEGL warning: DRI2: xcb_connect failed
libEGL warning: GLX: XOpenDisplay failed

Could you let me know how to solve it?

Thanks,
Jason

I'm getting the same error as reported by jasjuang on EC2. Some information, documentation, or a trivial working application sample would be really useful.
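For what it's worth, the xcb_connect / XOpenDisplay warnings usually mean the EGL implementation is still trying to reach an X server, which is exactly what the device-platform path described in the post avoids. Below is a minimal sketch along the lines of the post, assuming the NVIDIA driver's EGL with the EGL_EXT_device_enumeration and EGL_EXT_platform_device extensions; the device index, pbuffer size, and config attributes are arbitrary placeholders, and error handling is mostly omitted.

```cpp
// Minimal headless EGL + desktop OpenGL context creation (no X server).
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cstdio>

int main()
{
    // The device-enumeration entry points are extensions, so load them first.
    auto eglQueryDevicesEXT = (PFNEGLQUERYDEVICESEXTPROC)
        eglGetProcAddress("eglQueryDevicesEXT");
    auto eglGetPlatformDisplayEXT = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
        eglGetProcAddress("eglGetPlatformDisplayEXT");

    // 1. Enumerate EGL devices (GPUs) and pick the first one.
    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);

    // 2. Get a display for that device instead of an X11 display.
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              devices[0], nullptr);
    EGLint major = 0, minor = 0;
    eglInitialize(dpy, &major, &minor);

    // 3. Choose a config and create a small pbuffer surface to render into.
    const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs = 0;
    eglChooseConfig(dpy, configAttribs, &config, 1, &numConfigs);

    const EGLint pbufferAttribs[] = { EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, config, pbufferAttribs);

    // 4. Ask for desktop OpenGL (not OpenGL ES), then create and bind a context.
    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);

    printf("EGL %d.%d initialized, context is current\n", major, minor);

    // ... render with plain OpenGL here and glReadPixels() the result ...

    eglTerminate(dpy);
    return 0;
}
```

Linking against the driver's libEGL and libGL (or libOpenGL with GLVND) is assumed; something like `g++ headless.cpp -lEGL -lGL` (file name hypothetical) should be close. If the Mesa libEGL is picked up instead of the NVIDIA one, the same xcb warnings tend to reappear.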

I got a crash during context creation. When I backtraced it, I got a very interesting list of libs:

* /usr/lib/nvidia-361/libnvidia-eglcore.so.361.42
* /usr/lib/nvidia-361/libGLESv1_CM_nvidia.so.1
* /usr/lib/nvidia-361/libEGL_nvidia.so.0

The app was linked with these libs (nvidia-361.42-0ubuntu2):
-L/usr/lib/nvidia-361 -lGL -L/usr/lib/nvidia-361 -lEGL

So even though EGL_OPENGL_API was requested, OpenGL ES 1 code was executed.
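One thing worth checking in a situation like this (just a suggestion, not a diagnosis): eglBindAPI returns EGL_FALSE on failure, and the default API is OpenGL ES, so if the return value is never checked the context can silently end up as an ES context. A small fragment to slot into the setup code before eglCreateContext:

```cpp
// Verify that desktop OpenGL is actually the bound API before creating the
// context; if eglBindAPI fails, EGL falls back to its default (OpenGL ES).
if (eglBindAPI(EGL_OPENGL_API) != EGL_TRUE) {
    fprintf(stderr, "eglBindAPI(EGL_OPENGL_API) failed: 0x%x\n", eglGetError());
}
if (eglQueryAPI() != EGL_OPENGL_API) {
    fprintf(stderr, "bound API is not desktop OpenGL\n");
}
```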

If you could tarball what you have and send it my way (or post it), it would be much appreciated. I'll be able to compare it with what I am doing and see why it doesn't work for me!

Hi Peter. I just tried out EGL for desktop OpenGL on Linux (OpenGL 4.3). Are you aware that the NVIDIA Video Codec SDK doesn't work with this setup? The NVENC encoder crashes with NV_ENC_ERR_UNSUPPORTED_DEVICE when the OpenGL context is created via EGL.

Hi Michael. NVENC should indeed work together with EGL. What GPU and driver are you using? Does your code work on the same system with an OpenGL context managed by X? And can you say a few more words about your application, e.g. how it creates the context, how you set up the CUDA interop to get the OpenGL buffer to NVENC, etc.? Thanks, Peter

Well, your colleagues at NvPipe said it shouldn't work. I am on the 384 driver, Ubuntu 16.04. I can't really put the whole details of my app here, but it works fine when I use GLX to create the GL context. When I try EGL, NVENC throws the unsupported-device error upon encoder init. I do use GL interop with NVENC. I also set up a CUDA context because on Linux I am trying to use the ABGR format directly. Thanks.

What my colleagues in the other thread mentioned is not that NVENC doesn't work with EGL, but rather that the convenience routines to compress OpenGL buffers directly are only supported for GLX-created contexts. So if you're using an EGL-managed context, you will need to use the legacy path: map your OpenGL buffer into CUDA via OpenGL/CUDA interop and then compress the CUDA buffer with NVENC. Hope this clarifies the situation.
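For reference, a rough sketch of that legacy path using the standard CUDA runtime interop calls (cudaGraphicsGLRegisterBuffer and friends); the function name and the PBO parameter are placeholders, the NVENC side is only indicated since its setup depends on the application, and error handling is omitted:

```cpp
#include <GL/gl.h>
#include <cuda_gl_interop.h>

// Map an OpenGL pixel buffer object into CUDA so its device pointer can be
// handed to the encoder as a CUDA input buffer (instead of the GLX-only
// OpenGL input path).
void encodeFrameFromPBO(GLuint pbo)
{
    cudaGraphicsResource_t resource = nullptr;

    // Registration could be done once and cached; shown inline for simplicity.
    cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsRegisterFlagsReadOnly);

    // Map the buffer and fetch a CUDA device pointer to the pixel data.
    cudaGraphicsMapResources(1, &resource);
    void  *devPtr = nullptr;
    size_t size   = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &size, resource);

    // ... submit devPtr to the NVENC session as a CUDA device buffer here ...

    cudaGraphicsUnmapResources(1, &resource);
    cudaGraphicsUnregisterResource(resource);
}
```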

This is a wonderful thing. I have been waiting ages for such support.

I've been able to create an EGL context and render stuff using NVIDIA driver 384 (and 367 before) on Debian stretch.
However, I have to glFlush() the rendering pipeline every ~100,000 triangles to be able to read proper pixels from the output, not to mention the need to glFinish() before calling glReadPixels(). Otherwise I get garbage in the output image (or nothing at all). The GL layer reports no error.
Is this some inherent problem with the driver/libraries, or am I doing something wrong? Any pointers?

Nice post. But where can I find the EGL headers?
The Khronos "eglplatform.h" still needs X11 (line 116): https://www.khronos.org/reg...
I still can't compile my program without X11.

Were you able to resolve the issue? We are facing a similar issue after creating the EGL context. My email is mohit.juneja@gmail.com

I managed to hack a simple function that waits until everything is finished; see waitForGL in the following file:
https://github.com/Melown/l...

It basically creates a sync object (fence) and waits until it is signaled (I do this in a loop with a half-second timeout because I want to see that the program is alive in the log). Then it calls glFinish() twice.

This is the only way I managed to make sure that everything is processed and I can safely grab the framebuffer content.
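Not the exact code from that repository (the URL above is truncated), but a sketch of the same idea, assuming an OpenGL 3.2+ context with sync objects available and whatever GL loader the application already uses:

```cpp
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstdio>

// Insert a fence after all submitted GL commands and block until the GPU has
// signaled it, logging periodically so long renders show progress in the log.
// Finally call glFinish() as an extra safety net before glReadPixels().
void waitForGL()
{
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

    for (;;) {
        // Flush on the first wait so the fence actually reaches the GPU,
        // then poll with a 0.5 s timeout (nanoseconds).
        GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                         500 * 1000 * 1000);
        if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED)
            break;
        if (status == GL_WAIT_FAILED)
            break;  // something went wrong; glGetError() will say what
        fprintf(stderr, "still rendering...\n");
    }
    glDeleteSync(fence);

    glFinish();  // belt and braces before reading back the framebuffer
}
```

The comment above calls glFinish() twice; a single call is shown here to keep the sketch short.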

Does this work with recent drivers on Ubuntu 16.04? If not, what driver version and OS do you use?

I've tried to use this, wrapping it around an existing OpenGL application, and it mostly works, except for one thing: glGenFramebuffers() fails. It always returns 0. In fact, I've had it segfault. Any idea what could cause this?

(Titan XP, driver version 418.56)