glxinfo still shows llvmpipe or incomplete Nvidia while CUDA is working perfectly

Hello,
I’m trying to get OpenGL working on the Titan Xp installed in this Ubuntu 18.04 workstation, but so far I’ve had no success. CUDA programs work as expected:

nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            On   | 00000000:3B:00.0 Off |                  N/A |
| 32%   54C    P2   119W / 250W |   1437MiB / 12196MiB |     56%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     20686      C   python                                       713MiB |
|    0     20704      C   python                                       713MiB |
+-----------------------------------------------------------------------------+

The Ubuntu install is headless, so I first need to start an X server with `startx`.
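For reference, I launch it from the SSH session roughly like this (running it in the background; whether sudo is needed depends on your Xwrapper settings):

# start an X server on display :0 and leave it running in the background
sudo startx -- :0 &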

Once the server is up, I run (from an SSH session, if that matters):

DISPLAY=:0 glxinfo | grep OpenGL
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 8.0, 256 bits)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 19.0.8
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.1 Mesa 19.0.8
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 19.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:

I get the Mesa software renderer (llvmpipe), not the NVIDIA one.

I also tried forcing the GLX vendor library:

DISPLAY=:0 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo
name of display: :0
display: :0  screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
    GLX_ARB_context_flush_control, GLX_ARB_create_context,
    GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float,
    GLX_ARB_framebuffer_sRGB, GLX_ARB_multisample,
    GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
    GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
    GLX_EXT_import_context, GLX_EXT_libglvnd, GLX_EXT_texture_from_pixmap,
    GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer,
    GLX_OML_swap_method, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
    GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_SGI_make_current_read
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
    GLX_ARB_context_flush_control, GLX_ARB_create_context,
    GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile,
    GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age,
    GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
    GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
    GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control,
    GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap,
    GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer,
    GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer,
    GLX_NV_multisample_coverage, GLX_NV_present_video,
    GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group,
    GLX_NV_video_capture, GLX_NV_video_out, GLX_SGIX_fbconfig,
    GLX_SGIX_pbuffer, GLX_SGI_swap_control, GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
    GLX_ARB_context_flush_control, GLX_ARB_create_context,
    GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float,
    GLX_ARB_get_proc_address, GLX_ARB_multisample,
    GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
    GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
    GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info,
    GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer
OpenGL vendor string:
OpenGL renderer string:
OpenGL version string:
OpenGL extensions:

But this still doesn’t look right: the client-side GLX vendor is NVIDIA, yet direct rendering is off and the OpenGL strings come back empty.
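One more check that may help (assuming xrandr is installed): listing the render providers the running X server exposes, to see whether it picked up the NVIDIA GPU at all:

DISPLAY=:0 xrandr --listproviders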

In the Xorg log, I also notice this:

[   772.085] (II) AIGLX: Screen 0 is not DRI2 capable
[   772.085] (EE) AIGLX: reverting to software rendering
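(For completeness, the GLX-related server messages can be pulled from the stock Ubuntu log path like this:)

# show every GLX/AIGLX line Xorg logged, to see which GLX module was loaded
grep -i glx /var/log/Xorg.0.log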

So I guess I’m still missing something.

Thanks in advance for your help.
nvidia-bug-report.log.gz (766 KB)

The monitor is connected to, and the X server is running on, the integrated Matrox graphics. Please connect the monitor to an output on the NVIDIA card and disable the Matrox in the BIOS, if possible.

Hi generix, thanks for your fast answer. In principle I have no physical access to the workstation, and it is supposed to be used headless.

I’ll ask the IT guys whether that is the only way, but can I bind the X server to the NVIDIA card without physically reconnecting the monitor?

OK, this can also be done by setting some options in xorg.conf. How do you access the screen, then, if not locally?

I access the machine via SSH. Specifically, what I’m trying to achieve is native GPU rendering for a ParaView pvserver running on this workstation, so that I can then connect to it with a client.

Reference: Setting up a ParaView Server - KitwarePublic

Other use cases: X Forwarding + VirtualGL.
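Concretely, once a GPU-backed X server is up on :0, I’d launch the server along these lines (the binary path is whatever your ParaView install uses; 11111 is just ParaView’s default port):

# run pvserver against the GPU-backed display so rendering happens on the Titan Xp
DISPLAY=:0 ./pvserver --server-port=11111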

You could use this xorg.conf:

Section "Monitor"
    Identifier     "Monitor0"
    HorizSync       31.0 - 70.0
    VertRefresh     60
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:59:0:0"
    Option         "ConnectedMonitor" "DFP-1"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    SubSection     "Display"
        Virtual     1920 1080
    EndSubSection
EndSection

It fakes a monitor.

Edit: if you just need an X server running for VirtualGL, this xorg.conf should suffice:

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:59:0:0"
    Option         "AllowEmptyInitialConfiguration"
EndSection
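If it works, glxinfo run against the new server should report the NVIDIA vendor and renderer instead of llvmpipe (assuming it comes up on display :0):

DISPLAY=:0 glxinfo | grep -E 'vendor|renderer'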

Thanks @generix. Your “Edit” configuration works as expected.

One last request, and I hope I’m not abusing your kindness: I’d like to keep the option of having the Matrox adapter show the WM on a physical monitor when the display manager starts. Could you help me with an xorg.conf (or an equivalent configuration) that allows two displays:

  1. Matrox > To display the WM on the physical monitor
  2. Nvidia > To perform (heavy) OpenGL tasks.

https://devtalk.nvidia.com/default/topic/1060997/linux/not-able-to-update-tesla-p100-driver-384-to-418/post/5399455/#5399455
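For later readers: a classic two-screen (“Zaphod”) layout along these lines should cover this. It is an untested sketch; the mga driver name for the Matrox is an assumption, and the BusID values must match what lspci reports on your machine:

Section "ServerLayout"
    Identifier     "dual"
    Screen      0  "matrox"
    Screen      1  "nvidia" RightOf "matrox"
EndSection

Section "Device"
    Identifier     "matrox"
    Driver         "mga"
EndSection

Section "Screen"
    Identifier     "matrox"
    Device         "matrox"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:59:0:0"
    Option         "AllowEmptyInitialConfiguration"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
EndSection

The display manager and WM would then appear on DISPLAY=:0.0 (Matrox), while heavy OpenGL jobs can target DISPLAY=:0.1 (NVIDIA).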