Can't get glxinfo to use nvidia gpu over ssh (Ubuntu 18.04.5 LTS, dual RTX2080)

I am attempting to set up a remote server to run OpenGL on an Nvidia GeForce RTX 2080 SUPER, on Ubuntu 18.04.5 LTS. The machine has two Nvidia cards. My eventual goal is to be able to run pybullet over ssh, but I think the issue limiting me right now is that OpenGL is running on the default Intel GPU rather than the Nvidia GPU.

nvidia-settings yields
ERROR: Unable to load info from any available system
(nvidia-settings:3476): GLib-GObject-CRITICAL **: 23:49:17.890: g_object_unref: assertion ‘G_IS_OBJECT (object)’ failed
** Message: 23:49:17.894: PRIME: No offloading required. Abort
** Message: 23:49:17.894: PRIME: is it supported? no

glxinfo | grep render yields
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
OpenGL renderer string: Intel® Iris™ Plus Graphics 655

glxgears shows the gears but they are not rotating, and returns this error:
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast

nvidia-smi yields
| NVIDIA-SMI 455.38 Driver Version: 455.38 CUDA Version: 11.1 |
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
| 0 GeForce RTX 208… Off | 00000000:0B:00.0 Off | N/A |
| 18% 32C P8 17W / 250W | 53MiB / 7979MiB | 0% Default |
| | | N/A |
| 1 GeForce RTX 208… Off | 00000000:0C:00.0 Off | N/A |
| 18% 30C P8 3W / 250W | 5MiB / 7982MiB | 0% Default |
| | | N/A |

| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
| 0 N/A N/A 1165 G /usr/lib/xorg/Xorg 51MiB |
| 1 N/A N/A 1165 G /usr/lib/xorg/Xorg 4MiB |

My best guess is that GLX is configured to use the Intel hardware instead of the Nvidia hardware, but I can't figure out how to fix it. Do you have any suggestions?

Among many other things, I have tried reinstalling the driver with sudo apt install nvidia-driver-455, but the problem was not fixed.
I have also tried editing /usr/share/X11/xorg.conf.d/10-amdgpu.conf and /usr/share/X11/xorg.conf.d/10-nvidia.conf (based on [nvidia-xconfig doesnt do what i want it to, nor does nvidia-settings](https://this thread)), to no avail.
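For reference, the kind of stanza those xorg.conf.d files carry is a Device section like the one below. This is only an illustrative sketch, not a configuration I know to be correct for this machine; the BusID is converted from the nvidia-smi output above (PCI bus 0x0B is decimal 11, and xorg.conf expects decimal):

```
Section "Device"
    Identifier "Nvidia Card"
    Driver     "nvidia"
    VendorName "NVIDIA Corporation"
    BusID      "PCI:11:0:0"
EndSection
```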

Thank you very much.
nvidia-bug-report.log.gz (423.4 KB)

Your host system just has the two 2080s. The problem is that when you run GLX applications over ssh, they try to use indirect GLX to display on the client you're connecting from. That doesn't work, so libGL falls back to the software rasterizer (swrast), and the Intel renderer string it reports comes from your client machine.
To use the X server of the host, you'll have to set at least the DISPLAY variable, like
DISPLAY=:0 glxinfo
Also, the XAUTHORITY variable might have to be set.
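Putting both together, something like the following (the XAUTHORITY path here is only a guess for a lightdm setup; adjust it to whatever display manager your host runs):

```shell
# Point GLX clients at the host's own X server (:0) instead of trying
# indirect GLX back to the ssh client.
export DISPLAY=:0
# Guessed Xauthority path for lightdm; adjust for your display manager.
export XAUTHORITY=/var/run/lightdm/root/:0

# If glxinfo is installed, this should now report the NVIDIA renderer:
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo | grep "OpenGL renderer"
fi
echo "using display $DISPLAY"
```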
Please post the output of
ps a |grep X
on the remote host.
Or are you trying to render on the remote host and display on the client?


Output of ps a |grep X:
1165 tty7 Ssl+ 0:01 /usr/lib/xorg/Xorg -core :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
4601 pts/2 S+ 0:00 grep --color=auto X

I was initially trying to display on my laptop, but I would be happy if even just a VNC remote desktop worked (which, I presume, would render on the remote host).

Ok, that clarifies it.
The simplest setup would be to start x11vnc on the remote host and then connect to it from the client. As root:
DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 x11vnc
Otherwise, you would have to use a quite complex VirtualGL setup.
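If you'd rather not expose VNC on the network, one common variant is to bind x11vnc to localhost and tunnel it through ssh. A sketch (nothing below runs anything itself, it just prints the three commands to issue; "user@remote-host" is a placeholder for your own login, and x11vnc plus a VNC viewer are assumed installed):

```shell
# VNC's default port.
VNC_PORT=5900

# Print the three commands for the tunneled setup: x11vnc bound to
# localhost on the host, then an ssh port forward and a viewer on the laptop.
echo "on the host (as root): DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 x11vnc -localhost -rfbport $VNC_PORT"
echo "on the laptop:         ssh -L $VNC_PORT:localhost:$VNC_PORT user@remote-host"
echo "on the laptop:         vncviewer localhost:$VNC_PORT"
```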

So the issue is that I was trying to do this over ssh?
I will try this with VNC and see if I can figure it out. Thanks!