With Ubuntu 16.04 running Linux 4.15.0-15, I can’t get a simple EGL program to run to completion. It appears that GLEW init fails when trying to get the GL_VERSION string.
I was able to get it working with the 384 driver series, but I need 390+ to work with newer CUDA releases.
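For context, the program is just the usual headless-EGL boilerplate followed by glewInit(). The sketch below is a trimmed, from-memory approximation of the attached tg.cpp (attribute lists and error handling may not match the file exactly), but it shows where things fall over for me:

#include <GL/glew.h>
#include <EGL/egl.h>
#include <iostream>

int main() {
    // Grab the default display and initialize EGL.
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint egl_major, egl_minor;
    if (!eglInitialize(dpy, &egl_major, &egl_minor)) {
        std::cerr << "eglInitialize failed" << std::endl;
        return 1;
    }

    // Pick a pbuffer-capable config and bind the desktop OpenGL API.
    const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint num_cfg;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &num_cfg);
    eglBindAPI(EGL_OPENGL_API);

    // Small pbuffer surface plus a context, made current.
    const EGLint pb_attribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pb_attribs);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);

    // This is where it dies for me: glewInit() reports failure while
    // trying to read GL_VERSION.
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        std::cerr << "glewInit failed: " << glewGetErrorString(err) << std::endl;
        return 1;
    }

    std::cout << glGetString(GL_VERSION) << std::endl;
    return 0;
}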
nvidia-bug-report.log.gz (190 KB)
tg.cpp (1.39 KB)
It turns out I had a pretty old GLEW version. After upgrading to GLEW 2.1, the little test runs all the way through, but the glGetString calls at the end return nullptr. I think the glGetString entry point itself is borked, because if I do this:
// Bypass the glGetString resolved at link time and ask EGL for it directly.
using FunctionPtr = const char *(*)(GLenum);
auto otherGetString = reinterpret_cast<FunctionPtr>(eglGetProcAddress("glGetString"));
if (!otherGetString) {
    std::cout << "null glGetString" << std::endl;
} else {
    auto *ver = otherGetString(GL_VERSION);
    std::cout << ver << std::endl;
}
I get “4.6.0 NVIDIA 390.30”
Thoughts?
I have a few updates:
- I had installed NVIDIA's driver from the web (390.48), but installing CUDA 9.1 from NVIDIA's site overwrote that and put 390.30 in its place. I have since added a proprietary-drivers PPA to my Ubuntu repositories, installed 390.48 from it, and now I appear to be up to date.
The code posted above prints:
4.6.0 NVIDIA 390.48
This seems to point to the GL functions not being aliased correctly: the address of the glGetString symbol resolved at link time differs from the address returned by eglGetProcAddress("glGetString").
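To double-check that, I compared the two addresses with a tiny standalone program (just a sketch; it only needs to link against libEGL and libGL, no GL context required):

#include <EGL/egl.h>
#include <GL/gl.h>
#include <cstdio>

int main() {
    // The glGetString symbol the linker resolved from libGL...
    void *linked = reinterpret_cast<void *>(&glGetString);
    // ...versus the dispatch pointer EGL hands back at runtime.
    void *via_egl = reinterpret_cast<void *>(eglGetProcAddress("glGetString"));
    std::printf("linked glGetString:       %p\n", linked);
    std::printf("eglGetProcAddress result: %p\n", via_egl);
    return 0;
}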
@brian Good catch. Did you ever figure out why this was happening? I'm trying to fix the issue without having to call eglGetProcAddress() before every OpenGL call.
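The stopgap I'm considering is to resolve each entry point once through eglGetProcAddress() and cache it, roughly like this (sketch only, the names are my own):

#include <EGL/egl.h>
#include <GL/gl.h>

// Cached wrapper so eglGetProcAddress is only consulted on first use.
using GetStringFn = const GLubyte *(*)(GLenum);

inline GetStringFn egl_glGetString() {
    static auto fn = reinterpret_cast<GetStringFn>(eglGetProcAddress("glGetString"));
    return fn;
}

// usage: std::cout << egl_glGetString()(GL_VERSION) << std::endl;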
I'm still unsure why it happened. I'm now running Fedora 28 (with CUDA 9.2 and the 396 drivers) and haven't seen the issue there.
Hi Brian,
It turns out that this is likely an Ubuntu packaging issue (assuming you’re using Ubuntu) – EGL works on Arch and Fedora, and you can get it to work on Ubuntu if you install the drivers from a .run file rather than via apt.