Ubuntu 18.04 Uses LLVM instead of NVIDIA Drivers for OpenGL

I am managing an Ubuntu server with several NVIDIA graphics cards. I would like to update OpenGL to the latest version, but I cannot do that because the OpenGL renderer is llvmpipe (LLVM). How do I change the renderer to the NVIDIA GPUs? Please see more information about my system below. Thanks in advance!

$ nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)     Off | 00000000:04:00.0 Off |                  N/A |
| 23%   23C    P8    12W / 250W |      0MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  TITAN X (Pascal)     Off | 00000000:05:00.0 Off |                  N/A |
| 23%   25C    P8     9W / 250W |      0MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  GeForce RTX 208...   Off | 00000000:08:00.0 Off |                  N/A |
| 30%   27C    P8    18W / 250W |      0MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  GeForce RTX 208...   Off | 00000000:09:00.0 Off |                  N/A |
| 30%   24C    P8     4W / 250W |      0MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   4  TITAN X (Pascal)     Off | 00000000:84:00.0 Off |                  N/A |
| 23%   26C    P8     8W / 250W |      0MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   5  TITAN X (Pascal)     Off | 00000000:85:00.0 Off |                  N/A |
| 23%   20C    P8     7W / 250W |      0MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   6  TITAN X (Pascal)     Off | 00000000:88:00.0 Off |                  N/A |
| 46%   77C    P2   153W / 250W |   1417MiB / 12196MiB |     72%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   7  TITAN X (Pascal)     Off | 00000000:89:00.0 Off |                  N/A |
| 49%   82C    P2   198W / 250W |   1417MiB / 12196MiB |     79%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    6   N/A  N/A     25830      C   python3                          1415MiB |
|    7   N/A  N/A     25643      C   python3                          1415MiB |
+-----------------------------------------------------------------------------+

$ glxinfo | grep -i opengl

OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 10.0.0, 256 bits)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 20.0.8
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.1 Mesa 20.0.8
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 20.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:

$ glxinfo |grep render

direct rendering: Yes
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer, GLX_MESA_query_renderer,
Extended renderer info (GLX_MESA_query_renderer):
OpenGL renderer string: llvmpipe (LLVM 10.0.0, 256 bits)
GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth,
GL_MESA_ycbcr_texture, GL_NV_conditional_render, GL_NV_depth_clamp,
GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth,
GL_NV_conditional_render, GL_NV_depth_clamp, GL_NV_fog_distance,
GL_EXT_polygon_offset_clamp, GL_EXT_read_format_bgra, GL_EXT_render_snorm,
GL_MESA_shader_integer_functions, GL_NV_conditional_render,
GL_OES_element_index_uint, GL_OES_fbo_render_mipmap,

$ nvidia-settings

(nvidia-settings:25173): dbind-WARNING **: 20:46:01.957: Couldn’t register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

ERROR: Unable to load info from any available system

(nvidia-settings:25173): GLib-GObject-CRITICAL **: 20:46:02.085: g_object_unref: assertion ‘G_IS_OBJECT (object)’ failed
** Message: 20:46:02.089: PRIME: No offloading required. Abort
** Message: 20:46:02.089: PRIME: is it supported? no

nvidia-bug-report.log.gz (3.4 MB)

In order to use GLX on an NVIDIA GPU, you need to have an active X screen on it. It looks like your X server is defaulting to the ASPEED graphics device rather than one of the NVIDIA GPUs, so no NVIDIA renderer is available to your GLX applications.

With a more modern X server, the server would automatically create “GPU screens” for the NVIDIA GPUs, which would allow GLX applications to render on those GPUs through the new “PRIME” render offload support. Unfortunately, the X server on your system is too old to support that.
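Once a new enough X server is running and the GPU screens exist, offloading is opted into per application with a couple of environment variables. A quick check would be something along these lines (assuming the NVIDIA GLX libraries are installed):

$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"

If offload is working, the renderer string reports one of the NVIDIA GPUs instead of llvmpipe.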

Alternatively, you could create an /etc/X11/xorg.conf file that creates a separate X screen per GPU. You can do that with nvidia-xconfig --enable-all-gpus.
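For example (the exact sections nvidia-xconfig generates may differ; the BusID values correspond to the PCI addresses in the nvidia-smi output above):

$ sudo nvidia-xconfig --enable-all-gpus

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "TITAN X (Pascal)"
    BusID          "PCI:4:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
EndSection

After restarting the X server, each GPU gets its own X screen (:0.0, :0.1, ...) that GLX applications can target through the DISPLAY variable.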

Finally, if your application can use EGL instead of GLX and doesn’t need to display anything on a physical display device, then you should be able to use the EGLDevice extensions to render directly on an NVIDIA GPU, bypassing the Xorg server.
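A minimal sketch of that headless path, assuming libEGL from the NVIDIA driver and the EGL_EXT_device_enumeration / EGL_EXT_platform_device extensions (error checking omitted, file name hypothetical):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void) {
    /* Load the device-enumeration entry points. */
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    /* Enumerate the EGL devices; each NVIDIA GPU shows up as one device. */
    EGLDeviceEXT devices[16];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(16, devices, &numDevices);
    printf("found %d EGL device(s)\n", numDevices);

    /* Open a display directly on the first device -- no X server involved. */
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              devices[0], NULL);
    EGLint major = 0, minor = 0;
    eglInitialize(dpy, &major, &minor);
    printf("EGL %d.%d on device 0\n", major, minor);

    /* From here: eglChooseConfig(), eglCreateContext(), and either a pbuffer
       surface or EGL_KHR_surfaceless_context for off-screen rendering. */
    eglTerminate(dpy);
    return 0;
}

Compile with something like: $ gcc egl_headless.c -lEGL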

Thanks so much for the quick reply! How should I update the X server so that it supports GLX rendering with the new PRIME render offload?

For Ubuntu, newer X servers are generally part of a “hardware enablement stack” (HWE) set of packages. Please check the Kernel/LTSEnablementStack page on the Ubuntu Wiki for more info.
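On Ubuntu 18.04 that should boil down to something along the lines of the following (check the wiki page for the current package names; a reboot is needed to pick up the new kernel and X server):

$ sudo apt-get update
$ sudo apt-get install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04
$ sudo reboot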

Thanks again! My server has an Intel Xeon CPU, which does not have integrated graphics. Will PRIME render offload work with the ASPEED graphics device?

I’m not sure, sorry.

I guess it doesn’t. The aspeed and Matrox G200 server-graphics DRM drivers are very simple ones that don’t provide the PRIME (copy) functions the modesetting driver relies on, at least the last time I took a look at them. So you would need to make use of VirtualGL (see the sketch below).
Since Mr. Plattner seems to be bored: does NVIDIA-to-NVIDIA PRIME offload work?
Having a lot of NVIDIA boards like the OP does makes the question an obvious one.
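With the classic VirtualGL setup, you would first bring up X screens on the NVIDIA GPUs (e.g. via the nvidia-xconfig route above) and then redirect an application’s GL rendering to one of them, roughly like this (display numbers and the application name are only examples):

$ vglrun -d :0.1 glxinfo | grep "OpenGL renderer"
$ vglrun -d :0.1 ./my_gl_app

The -d option selects the “3D” X display (the NVIDIA screen) that does the actual rendering, while the rendered frames are read back and shown on whatever 2D display the application is using.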