Application not using my NVIDIA card. Multiple GLX errors. Works fine when running locally. RTX A4000. RHEL 8.5

Hi,
I’ve been looking for a solution for weeks.

I set up a machine with Red Hat 8.5 and am trying to run some CFD software.
Everything works fine when I run it locally.
When I try to run it through SSH, it launches but will not use my NVIDIA card, only the Intel one, as seen in the graphics report:

Graphics Report
library: STAR Mesa OpenGL
LP_NUM_THREADS: 0
KNOB_MAX_WORKER_THREADS: 1
window: OSMesaOffscreenRenderWindow
OpenGL vendor string: Intel Corporation
OpenGL renderer string: SWR (LLVM 6.0, 256 bits)
OpenGL version string: 3.3 (Core Profile) Mesa 18.1.2
OpenGL shading language string: 3.30
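
Side note: the renderer string "SWR (LLVM 6.0, 256 bits)" above is Mesa's OpenSWR software rasterizer (written by Intel, hence the vendor string), i.e. the remote session is rendering on the CPU rather than on either GPU. A quick way to see which renderer a session actually picks up, assuming glxinfo from the glx-utils package is installed:

glxinfo -B | grep -Ei "vendor|renderer"

Locally this should name the NVIDIA driver; in the remote session it shows whatever the GLX path there provides.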

While remote, it seems to me that my X server keeps crashing. If I run glxgears, I get errors like this:

glxgears
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 5 (X_GLXMakeCurrent)
Serial number of failed request: 0
Current serial number in output stream: 33

X connection to :1 broken (explicit kill or server shutdown)

Or sometimes it runs through at 50k FPS and doesn’t show the gears.
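
For what it’s worth, the ":1" in the broken-connection message tells you which X server glxgears was talking to; checking the session’s display variable makes that explicit:

echo $DISPLAY    # ":1" here would be the remote/proxy X server, not necessarily the GPU-attached one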

glxinfo always gives me this message:

X Error of failed request: GLXBadContextTag
Major opcode of failed request: 146 (GLX)
Minor opcode of failed request: 5 (X_GLXMakeCurrent)
Serial number of failed request: 53
Current serial number in output stream: 53

I don’t know where to start on solving this issue.
Here is the output of nvidia-smi:
Tue Oct 4 14:09:50 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.76       Driver Version: 515.76       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A4500    Off  | 00000000:73:00.0 Off |                  Off |
| 30%   30C    P8     8W / 200W |     29MiB / 20470MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2769      G   /usr/libexec/Xorg                  10MiB |
|    0   N/A  N/A      3116      G   /usr/bin/gnome-shell                4MiB |
|    0   N/A  N/A      3616      G   ...roxy-12.0.4.7508/etxproxy        3MiB |
|    0   N/A  N/A      7397      G   ...roxy-12.0.4.7508/etxproxy        5MiB |
+-----------------------------------------------------------------------------+
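
On the bright side, the table shows the card and driver are healthy, and that an Xorg instance plus gnome-shell are attached to the GPU, so the trouble seems to be in the remote session’s GLX path rather than in the driver itself. Assuming that Xorg instance is display :0 (a default single-seat setup; it may differ), one can verify it really renders on the NVIDIA card with:

DISPLAY=:0 glxinfo -B | grep -i renderer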

I have attached my bug-report log, my installer log, and also some conf files that I think may be useful.

Regards

How are you connecting, and to what, from remote? When using an xrdp server or vncserver, you’re only connecting to a software X server. To use the NVIDIA GPU remotely, you either have to use x11vnc/x0vncserver (which attach to the real, GPU-driven X server) or VirtualGL.
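
As a quick check that VirtualGL is doing its job, assuming it is installed and the GPU-attached X server runs as display :0 (the -d value may differ on your machine):

vglrun -d :0 glxinfo -B | grep -i renderer    # should now report the NVIDIA renderer
vglrun -d :0 glxgears                         # gears render on the GPU; frames are read back to your remote display

The application itself is then launched the same way, with vglrun prepended to its normal command line.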

Hi, I feel quite stupid: I just had to tell the application I was trying to run to use the NVIDIA GPU via VirtualGL.
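
For anyone finding this later: in practice that just meant prepending vglrun to the launch command, along the lines of the following, where the binary name is a placeholder for your own CFD executable and :0 is the GPU-attached X server on my machine:

vglrun -d :0 ./my_cfd_solver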

Thank you for your answer. The thread can be closed.