CUDA 11.2 GPU acceleration works while graphical output fails on Ubuntu 20.04

Hello there,

I just installed CUDA 11.2 for the first time on my new machine, which will be used for scientific computing. The machine has an AMD Ryzen CPU and an NVIDIA GeForce RTX 3090 GPU, and I am currently accessing its desktop remotely using X2Go.

I have gone through all the installation steps and verified the integrity of the installation by running deviceQuery successfully. I can run applications that require CUDA acceleration, but whenever I try to display graphical output, such as from the CUDA nbody sample, the output window crashes immediately. For that sample, I get the following output:

> Compute 8.6 CUDA device: [GeForce RTX 3090]
> CUDA error at bodysystemcuda_impl.h:184 code=999(cudaErrorUnknown) "cudaGraphicsGLRegisterBuffer(&m_pGRes[i], m_pbo[i], cudaGraphicsMapFlagsNone)"
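For reference, the failing call boils down to something like the sketch below. This is my own minimal illustration with explicit error checking, not the actual nbody source, and the function name registerPbo is made up. As far as I understand, cudaGraphicsGLRegisterBuffer can only succeed when the current OpenGL context is owned by the NVIDIA driver.

// Minimal sketch of the failing interop call (my own repro, not nbody code).
// Registering a GL buffer with CUDA requires the current OpenGL context to
// be backed by the NVIDIA GPU; under a software renderer it errors out.
#include <cstdio>
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

cudaGraphicsResource_t registerPbo(GLuint pbo)  // pbo: an existing GL buffer
{
    cudaGraphicsResource_t res = nullptr;
    cudaError_t err =
        cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsNone);
    if (err != cudaSuccess) {
        // This is the path the nbody sample hits on my machine (code 999).
        std::fprintf(stderr, "register failed: %s\n", cudaGetErrorString(err));
        return nullptr;
    }
    return res;
}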

Moreover, when I try to run nvidia-settings, I get a blank window and the terminal output ERROR: Unable to load info from any available system. I have searched extensively for a solution on this forum and on Stack Exchange, and I have tried tampering with xorg.conf (which I have since reset), reinstalling CUDA and the NVIDIA driver, and much more, but nothing has resolved the issue. One more note: my primary graphics on Ubuntu, and the default OpenGL renderer, is llvmpipe rather than NVIDIA. I suspect this is related to my problem.
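As a quick way to test that suspicion, I put together the small check below (a sketch, assuming a GL context is already current; the function name checkGlContextDevice is mine). It asks CUDA which device is associated with the current OpenGL context; under llvmpipe it errors out instead of reporting the RTX 3090.

// Sketch: ask CUDA which device backs the current OpenGL context.
// Under a software renderer such as llvmpipe there is no NVIDIA-backed
// context, so cudaGLGetDevices fails instead of returning a device.
#include <cstdio>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void checkGlContextDevice()  // requires a current GL context
{
    unsigned int count = 0;
    int dev = -1;
    cudaError_t err = cudaGLGetDevices(&count, &dev, 1, cudaGLDeviceListAll);
    if (err != cudaSuccess || count == 0)
        std::printf("GL context is not on a CUDA device (%s)\n",
                    cudaGetErrorString(err));
    else
        std::printf("GL context is on CUDA device %d\n", dev);
}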

Here is the nvidia-bug-report.log.gz (283.4 KB) and some additional information:

$ nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 3090    On   | 00000000:61:00.0 Off |                  N/A |
|  0%   45C    P8    30W / 350W |     17MiB / 24268MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3401      G   /usr/lib/xorg/Xorg                  9MiB |
|    0   N/A  N/A      3514      G   /usr/bin/gnome-shell                6MiB |
+-----------------------------------------------------------------------------+

$ glxinfo | grep OpenGL
OpenGL vendor string: Mesa/X.org
OpenGL renderer string: llvmpipe (LLVM 11.0.0, 256 bits)
OpenGL version string: 3.1 Mesa 20.2.6
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL extensions:

Thank you for your help,

John

X2Go is a virtual X server running in software, not the real, NVIDIA-driven X server. If you want to use CUDA/GL interop, you have to connect to the real X server, e.g. by running x11vnc. X2Go may also be able to use VirtualGL.
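If you can't get a hardware-accelerated context, one workaround is to skip interop entirely and stage the simulation results through host memory before uploading them to OpenGL. A rough sketch of that fallback follows; the names uploadWithoutInterop, d_pos and n are illustrative, not from the nbody sample, and it assumes n stays fixed between calls.

// Fallback sketch for when GL interop is unavailable (e.g. under X2Go):
// copy CUDA results to a pinned host buffer, then upload to the GL buffer.
// Assumes n is the same on every call, since the staging buffer is reused.
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <cuda_runtime.h>

void uploadWithoutInterop(GLuint pbo, const float4 *d_pos, size_t n)
{
    static float4 *h_pos = nullptr;            // pinned staging buffer
    if (!h_pos)
        cudaMallocHost(reinterpret_cast<void **>(&h_pos),
                       n * sizeof(float4));
    cudaMemcpy(h_pos, d_pos, n * sizeof(float4), cudaMemcpyDeviceToHost);
    glBindBuffer(GL_ARRAY_BUFFER, pbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, n * sizeof(float4), h_pos);
}

This costs an extra device-to-host copy per frame, so it is slower than real interop, but it renders correctly on a software X server.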

Alright, thank you so much for your answer. That makes a lot of sense!

Cheers