Connecting CUDA to a specific X server?

Does anyone know if it’s possible to connect CUDA to a specific X server? What I’ve got right now is a remote headless node with an 8800 GTX in it. I’d like to be able to run my CUDA code on that, but have all my other windows piped to the operator workstation just like normal.

So it’d be [Operator Station GUI] → [Backend CUDA Processing on remote machine]

Any thoughts?

CUDA doesn’t connect to a specific X server; it connects to a specific GPU as presented by the NVIDIA kernel driver. So for functionality it doesn’t matter whether an X server is running on the G80. If no X server has claimed the card, you also don’t suffer from the watchdog timeout.
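
For reference, here’s a minimal sketch (not from your setup, just assuming the standard CUDA runtime API) of how a program enumerates the GPUs the kernel driver exposes and picks one by index, with no X display involved at all:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);             /* GPUs visible to the NVIDIA kernel driver */

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s (compute %d.%d)\n", i, prop.name, prop.major, prop.minor);
    }

    cudaSetDevice(0);   /* index of the 8800 GTX -- 0 is only an assumption here */
    /* ... launch kernels as usual; DISPLAY is never consulted ... */
    return 0;
}

Nothing in there touches DISPLAY, which is why the X forwarding of your other windows is unaffected.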

Your “normal windows”, how are they drawn? By an additional graphics card or by the G80? And how do you “forward” the windows: over an Xlib connection (SSH-tunneled?) or as VNC screenshots? There are so many possibilities, please give more detail.

Peter

Sorry, the normal windows are just things like emacs and some raster windows, and they’re just forwarded over an SSH tunnel back to my workstation, nothing fancy. It’d just be great if I could tell CUDA to connect to the card on the local machine while still forwarding everything else back to my workstation.

Hmm, on further examination it seems that that is the normal behavior for CUDA. I must have been confused, sorry!

Yes. Just be warned of the watchdog if the remote X server has claimed the G80 (even though it only shows the xdm login).
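
If you want to check whether a given card is subject to the watchdog, later CUDA toolkits report it in the device properties. A rough sketch (the field comes from the runtime API’s cudaDeviceProp; device 0 being the G80 is only an assumption):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);      /* assuming the G80 is device 0 */
    printf("run-time limit (watchdog) on kernels: %s\n",
           prop.kernelExecTimeoutEnabled ? "yes" : "no");
    return 0;
}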

Peter