Installing OpenGL causes infinite login prompt

Hi, I have installed the CUDA toolkit. After reboot, I am not able to log in. I am using Ubuntu 16.04.

This topic is discussed in a number of other posts on this forum; reading some of them may be helpful.

I have installed ‘’ as instructed at (third post from top <3>)

and this resolved the infinite login loop issue.

But when I try to compile gst-plugins-bad-1.8.3, it fails with errors:

gstglcontext.c:607:22: error: ‘GL_CONTEXT_PROFILE_MASK’ undeclared (first use in this function)
GetIntegerv (GL_CONTEXT_PROFILE_MASK, &context_flags);
gstglcontext.c:607:22: note: each undeclared identifier is reported only once for each function it appears in
gstglcontext.c:608:29: error: ‘GL_CONTEXT_CORE_PROFILE_BIT’ undeclared (first use in this function)
if (context_flags & GL_CONTEXT_CORE_PROFILE_BIT)
gstglcontext.c:610:29: error: ‘GL_CONTEXT_COMPATIBILITY_PROFILE_BIT’ undeclared (first use in this function)

I am not sure how to resolve this.

Yes, the usual method to avoid the login loop is to not install certain OpenGL-specific files. The login loop arises because the installation of these files by the NVIDIA driver breaks the X stack associated with the display running on the non-NVIDIA GPU (which obviously does not use NVIDIA-provided OpenGL files).

If you then expect to compile and use OpenGL code, you need to be sure you are using the OpenGL stack, libraries, and headers associated with your existing non-NVIDIA GPU, and things like CUDA/OpenGL interop probably won't work in that scenario.

I haven’t researched it, but the above compile errors may simply be an incompatibility between what you are trying to compile and your non-NVIDIA OpenGL stack. A straightforward test would be to start over with a clean system that has no NVIDIA software installed, and attempt to compile gst-plugins (whatever that is). If it fails to compile in the same way, then your issue has nothing to do with NVIDIA software. If it does compile, then I would suspect that your build environment in the failing case is not correctly pointed at your pre-NVIDIA OpenGL stack.


I have two graphics cards in my desktop right now, i.e. Intel and NVIDIA, but I am using the Intel one for display. Do I need to use the NVIDIA graphics card when I install and use CUDA?

No, you can use the Intel card (for display – obviously you need to use the NVIDIA card for CUDA), and that is probably what you are doing now.

Ordinary OpenGL work in that scenario would use the Intel OpenGL stack and run on your Intel card. If your gst-plugins-bad-1.8.3 work depended on the NVIDIA GPU, that would not work. Depending on how you are compiling, that may explain your results, but I’m just guessing here. For all I know it may work just fine on the Intel side too.

Things like CUDA/OpenGL interop would generally not work in that scenario, as interop usually requires the CUDA context and the OpenGL context to be on the same GPU.