CUDA with OpenGL, tunneling through X11: a tricky problem for an OpenGL newbie.

I may have bitten off more than I can chew :X , but this is what I would like to do:

  1. Run a CUDA program on a Linux host
  2. Transfer the rendered image to a buffer
  3. Display the buffer with OpenGL over an X11 tunnel to a non-CUDA-GPU Windows box.
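For context, the interop path these three steps describe normally looks something like the sketch below. This is not the poster's code: it uses the current cudaGraphics* interop API rather than the cudaGLRegisterBufferObject calls of the CUDA 2.x SDK, `fillImage` is a made-up stand-in kernel, and an extension loader (e.g. GLEW) is assumed for `glBindBuffer`. Note that the whole scheme presumes the GL context and the CUDA device sit on the same machine:

```cuda
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>
#include <GL/glew.h>   // assumption: an extension loader provides glBindBuffer

// Toy kernel standing in for the real renderer.
__global__ void fillImage(uchar4 *pixels, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        pixels[y * width + x] = make_uchar4(x % 256, y % 256, 128, 255);
}

void renderFrame(GLuint pbo, int width, int height) {
    cudaGraphicsResource *res;
    cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsWriteDiscard);

    uchar4 *devPtr;
    size_t size;
    cudaGraphicsMapResources(1, &res, 0);      // CUDA borrows the PBO
    cudaGraphicsResourceGetMappedPointer((void **)&devPtr, &size, res);
    dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
    fillImage<<<grid, block>>>(devPtr, width, height);
    cudaGraphicsUnmapResources(1, &res, 0);    // hand the PBO back to OpenGL

    // Draw the PBO contents; requires a current GL context on this machine.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    cudaGraphicsUnregisterResource(res);
}
```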

However, probably because either X11 or my local host doesn't support the required extensions, my program fails when checking for them, and therefore I cannot continue.

If I try to initialize GLUT with -display "localhost:10", I get the same error as above, plus a segmentation fault in my CUDA program. The other glutInit flags don't help either. :blink:

If I try to initialize with -display "localhost:0", I can tell that more extensions are supported, but my CUDA program still hits a segmentation fault.

I have tried to base my code on the CUDA SDK "Fluids" and "Simple OpenGL" examples, but I think these were intended for a GPU colocated with the display. :no:

Any tips? Examples?

My experience with OpenGL over ssh-facilitated X11 forwarding is pretty limited, but I believe the problem is that your program is trying to push the OpenGL rendering to your Windows graphics card. It looks like that isn't even working, due to limitations of your Windows X server. (The one time I did get OpenGL-over-ssh to work, it was connecting from a Mac to a Linux system.)

Even if OpenGL forwarding did work, CUDA’s OpenGL interoperability features would almost certainly fail because your CUDA data would live on the graphics card sitting in the Linux system, and your OpenGL buffers would be on the graphics card in the Windows system.

Given the lack of OpenGL support in your X server, your best bet will be manually copying your CUDA results back to the CPU for visualization using a non-3D X11 drawing library. You won’t get a particularly great framerate, but since you are looking at X11 over the network anyway, you can’t expect much. :)
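The copy-back suggested here is just a blocking device-to-host transfer. A minimal sketch, assuming the CUDA result is an RGBA image on the Linux-side GPU (`devImage` and the buffer layout are hypothetical):

```cuda
#include <cuda_runtime.h>
#include <vector>

// Copy a width x height RGBA image from the CUDA device to host memory.
std::vector<unsigned char> downloadImage(const unsigned char *devImage,
                                         int width, int height) {
    std::vector<unsigned char> host(size_t(width) * height * 4);
    // Blocking copy; once on the CPU, the buffer can be wrapped in an
    // XImage and drawn with XPutImage(), which works over plain X11
    // forwarding with no GLX support at all.
    cudaMemcpy(host.data(), devImage, host.size(), cudaMemcpyDeviceToHost);
    return host;
}
```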

I seem to recall discussing this with someone at one point in time, and the answer is that it’s not possible. CUDA interop with generic OpenGL over X11 doesn’t make sense and certainly doesn’t work.

Copying to the CPU and then doing your OpenGL calls should work fine, though.

Thanks. I’ll try your suggestion. I’m trying to improve a previous solution that dumped an image to a file and opened it with an X Windows image viewing tool. I figured an OpenGL alternative would be nicer.

Avoiding GLUT and using some GLX examples from the internet, I was able to get what I needed.
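For anyone following along, the GLUT-free GLX setup mentioned here typically boils down to the classic pattern below. This is a sketch with error handling omitted; the attribute list and window size are just examples:

```cuda
#include <GL/glx.h>
#include <X11/Xlib.h>

// Create a double-buffered RGBA window and make a GLX context current.
GLXContext createGLContext(Display **dpyOut, Window *winOut) {
    Display *dpy = XOpenDisplay(NULL);   // honours the DISPLAY variable
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    // The GLX visual may differ from the root window's, so set a colormap
    // explicitly to avoid a BadMatch on window creation.
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.event_mask = ExposureMask | KeyPressMask;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                               512, 512, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, win, ctx);
    *dpyOut = dpy;
    *winOut = win;
    return ctx;
}
```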

The major problem with OpenGL over X is that the OpenGL commands all get sent to the client, so apart from CUDA you get no benefit from the high-end card on the server, especially if your low-end thin-client card is missing all the interesting OpenGL features.


Try using VirtualGL via TurboVNC. This lets you do remote OpenGL rendering.

On the Linux server, install the VirtualGL server and the TurboVNC server.

In xorg.conf, check for the presence of:

Section "Module"
    Load           "glx"
    Load           "extmod"
EndSection

Make sure the correct OpenGL graphics device is addressed in xorg.conf. We use an NVS 290, but even Tesla C870 nodes should work, in principle.

From the client, log in to the Linux server by ssh and start the VNC server with vncserver (the path to vncserver might be different).

Open the TurboVNC client on your client machine and connect to the display number given, using the fully qualified server name. A window with a plain X session should start (the window manager is configured in ~/.vnc/xstartup).

To check that it works, run in a terminal:

vglrun glxgears
vglrun glxinfo

vglrun is delivered with VirtualGL.

In the TurboVNC client you can tweak the image quality using the JPEG settings.

If all this works, you should be able to run

vglrun /usr/local/NVIDIA_CUDA_SDK/bin/linux/release/nbody -device=0

or other GPU devices, depending on your config. Also try

vglrun /usr/local/NVIDIA_CUDA_SDK/bin/linux/release/volumeRender

We use a Sun Fire server, an NVIDIA Tesla S870, RHEL 5, CUDA 2.0, and the NVIDIA GLX module 177.67. It runs smoothly over a Gigabit connection, but VNC tunnels through a VPN work as well (starting at about 1 Mbit).

Have fun.