libGL.so libGLU.so missing in nvidia-driver:470 / 470-dkms network installer

Hi,

I am missing the libraries libGL.so and libGLU.so (and probably more) after installing the NVIDIA driver.

I followed the Linux installation guide to install nvidia-driver:470 and cuda-11-4 on my Rocky 8 system, using the package manager installation with the network repository approach. I added the cuda-rhel8 repo and then installed the driver.
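
For reference, these are roughly the commands from the guide that I used (the repo URL is the one the guide gives for RHEL 8 / Rocky 8):

$ sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
$ sudo dnf clean expire-cache
$ sudo dnf module install nvidia-driver:470-dkms
$ sudo dnf install cuda-11-4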

I noticed that neither
dnf module install nvidia-driver:470 nor
dnf module install nvidia-driver:470-dkms
installs libGL.so or libGLU.so! What do I need to do to get the missing libraries?

Any help would be greatly appreciated. Thanks

Those libraries are not part of the nvidia driver; libGL.so is provided by libglvnd and libGLU.so by Mesa (mesa-libGLU).
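
If in doubt, you can check which package owns a given library, e.g.:

$ dnf provides '*/libGL.so' '*/libGLU.so'
$ rpm -qf /usr/lib64/libGL.so.1

The unversioned .so symlinks used for linking normally come from the matching -devel packages (libglvnd-devel, mesa-libGLU-devel).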

Thanks for your reply!
I do have libglvnd and libglvnd-devel installed, which provide /usr/lib64/libGL.so. AFAIK glvnd just dispatches to the correct vendor libGL, and that dispatch seems to fail on my machine! When I run CUDA applications against /usr/lib64/libGL.so I get errors that seem to be caused by mismatched libraries.
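
As far as I understand, one can see which vendor library glvnd dispatches to from the GL vendor/renderer strings (glxinfo is in the glx-utils package):

$ glxinfo | grep -E 'OpenGL vendor|OpenGL renderer'

On a working NVIDIA setup this should report the NVIDIA driver rather than a software renderer such as llvmpipe.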

As an example: if I try to build the CUDA sample /usr/local/cuda/samples/2_Graphics/volumeRender, I get a warning that libGL.so and libGLU.so were not found. And indeed, in the search path /usr/lib64/nvidia there is no libGL.so:

$ updatedb && locate libGL.so
/usr/lib64/libGL.so
/usr/lib64/libGL.so.1
/usr/lib64/libGL.so.1.7.0

If I modify the findgllib.mk script to link against /usr/lib64/libGL.so, then the executable fails with:
CUDA error at volumeRender.cpp:436 code=999(cudaErrorUnknown) "cudaGraphicsGLRegisterBuffer(&cuda_pbo_resource, pbo, cudaGraphicsMapFlagsWriteDiscard)"
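
For completeness, to see which libGL the built binary actually picks up at runtime:

$ ldd ./volumeRender | grep -i libgl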

Please run nvidia-bug-report.sh as root and attach the resulting nvidia-bug-report.log.gz file to your post.

nvidia-bug-report.log.gz (1.1 MB)

The driver seems fine and nvidia-uvm is loaded. Looking at the sample code, I think GL is already initialized when the failure occurs. Please check for a general CUDA issue by running the deviceQuery sample and post its output.
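
For example (assuming the samples are in the default location; that directory is root-owned, so build with sudo or copy the samples somewhere writable first):

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ sudo make
$ ./deviceQuery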

deviceQuery looks good to me:

$ /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery
/usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Quadro K6000"
  CUDA Driver Version / Runtime Version          11.4 / 11.4
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 12204 MBytes (12796362752 bytes)
  (015) Multiprocessors, (192) CUDA Cores/MP:    2880 CUDA Cores
  GPU Max Clock rate:                            902 MHz (0.90 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 104 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS

Just to make sure: you’re trying to run this locally, not logged in from a remote machine?

Remotely, but I am using a VNC session (and have a physical screen attached to the machine). glxgears works just fine.

The VNC session does not connect to the real X server running on the NVIDIA GPU but instead uses a virtual X server running software GL.
To use the NVIDIA GPU from remote, either use x11vnc/x0vncserver (which attach to the real X server) or VirtualGL.
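
For example, with x11vnc attached to the real display (assuming the NVIDIA X server runs on :0 and x11vnc can find its auth cookie):

$ x11vnc -display :0 -auth guess -localhost

and connect a VNC viewer to port 5900 (e.g. through an SSH tunnel). With VirtualGL you would instead keep the virtual VNC desktop and launch GL applications through vglrun:

$ vglrun ./volumeRender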

Thank you very much for pointing me in the right direction!
I was not aware that my VNC server was creating a new virtual screen instead of using :0. I gave x11vnc a try and indeed the volumeRender sample now works without errors. As always, the problem was sitting in front of the screen…
