Hello, we have a Dell R370 server running Ubuntu 14.04 LTS.
We are at a loss as to what to do and confused about how all of this should work; we are novices with GPU acceleration and CUDA.
Here is our graphics setup:
sudo lshw -C display
*-display
description: 3D controller
product: GK104GL [Tesla K10]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:05:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress bus_master cap_list
configuration: driver=nvidia latency=0
resources: iomemory:33f0-33ef iomemory:33f0-33ef irq:31 memory:92000000-92ffffff memory:33fe0000000-33fefffffff memory:33ff0000000-33ff1ffffff
*-display
description: 3D controller
product: GK104GL [Tesla K10]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:06:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress bus_master cap_list
configuration: driver=nvidia latency=0
resources: iomemory:33f0-33ef iomemory:33f0-33ef irq:31 memory:91000000-91ffffff memory:33fc0000000-33fcfffffff memory:33fd0000000-33fd1ffffff
*-display UNCLAIMED
description: VGA compatible controller
product: G200eR2
vendor: Matrox Electronics Systems Ltd.
physical id: 0
bus info: pci@0000:0c:00.0
version: 01
width: 32 bits
clock: 33MHz
capabilities: pm vga_controller bus_master cap_list
configuration: latency=64 maxlatency=32 mingnt=16
resources: memory:90000000-90ffffff memory:93800000-93803fff memory:93000000-937fffff
We have a Matrox video card for display, and two Tesla K10 cards for acceleration.
The NVIDIA driver was manually installed. We downloaded a *.run file from the NVIDIA site and installed it:
sudo ./NVIDIA-Linux-x86_64-384.66.run
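As a basic sanity check (assuming nvidia-smi was installed along with the .run driver, which it normally is), we can confirm the kernel module is loaded and see the driver version with:
# confirm the nvidia kernel module is loaded
lsmod | grep nvidia
# show the driver version and the GPUs it sees
nvidia-smi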
We have several Windows clients connecting remotely to the server using FreeNX.
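Each FreeNX session gets its own nxagent X display rather than the console X server; inside a session this is what client applications render to, e.g.:
# the NX session display (this is the :2000.0 seen in the glxinfo output below)
echo $DISPLAY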
The issue we have is here:
glxinfo | grep Open
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL core profile version string: 1.2 (1.5 Mesa 6.4.1)
OpenGL core profile extensions:
OpenGL version string: 1.2 (1.5 Mesa 6.4.1)
OpenGL extensions:
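To rule out a library mix-up, we can also check which client-side libGL is actually being picked up (paths are the stock Ubuntu 14.04 ones):
# list every libGL known to the dynamic linker
ldconfig -p | grep libGL.so
# show which libGL glxinfo itself resolves to
ldd $(which glxinfo) | grep libGL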
Here are also some selected lines of output with LIBGL_DEBUG enabled:
LIBGL_DEBUG=verbose glxinfo |more
name of display: :2000.0
Warning: GL error 0x500 at line 922
display: :2000 screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGIS_multisample,
GLX_SGIX_fbconfig, GLX_SGIX_hyperpipe, GLX_SGIX_swap_barrier,
GLX_SGI_make_current_read
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
…
We run commands like glxgears successfully.
However, the OpenGL vendor and renderer strings show Mesa rather than NVIDIA. Also, the reported OpenGL version, 1.2, is so low that we cannot use software applications that require OpenGL 1.4 or higher.
Trying unset LIBGL_ALWAYS_INDIRECT has no effect.
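For reference, what we tried inside the NX session was along these lines:
# make sure indirect rendering is not being forced by the environment
unset LIBGL_ALWAYS_INDIRECT
glxinfo | grep "direct rendering"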
- We don't have an NVIDIA OpenGL version string, despite the driver being installed.
- Even if we wanted to use Mesa, the only version available is 1.2. We have another Ubuntu 14.04 server with a generic Matrox card but NO additional GPUs, and there the OpenGL version string is 2.1 with Mesa 10.1.3 (see the package check below).
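For comparison, the installed Mesa packages on each machine can be listed like this (package names are the stock Ubuntu 14.04 ones):
# compare the Mesa GL stack installed on the two servers
dpkg -l | grep -E "libgl1-mesa|mesa-utils"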
The goal is to use the VGA card just for display and the Tesla cards for GPU acceleration only, and to run software applications on the server where the OpenGL version is at least 1.4, not 1.2.
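We wondered whether the right direction is to give one of the Tesla GPUs its own X screen, something like this hypothetical xorg.conf fragment (BusID taken from the lshw output above; we have not tested this and don't know if it is correct):
Section "Device"
    Identifier "Tesla0"
    Driver     "nvidia"
    BusID      "PCI:5:0:0"
    # headless GPU, no monitor attached
    Option     "UseDisplayDevice" "none"
EndSection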
Any suggestions, advice, input would be much appreciated.
Thanks.
green
nvidia-bug-report.log (980 KB)