When I try to run the CUDA samples under Remote Desktop, I keep getting “There is no device.” errors. Is this a supported scenario?
I suspect this is due to the fact that the GeForce 8800 is the primary display adapter. Once my new motherboard comes in I will try this again with the card as the secondary adapter.
But VNC sucks. Would Remote Desktop work if the 8800 isn’t being used at all (by the OS)? Logic dictates that Remote Desktop would only replace access to active display adapters with local equivalents. A video card not mapped to a monitor isn’t active and is just another piece of hardware. Unless Remote Desktop limits remote access to any and all hardware on the remote machine in general (in which case Remote Desktop would suck too).
I’m using RemotelyAnywhere to access my machine over the internet so I can work with CUDA from a remote location. It might not be as good as Remote Desktop, but it does the job quite well.
Geez I don’t have that kind of money. I had to spend it to get the 8800 :)
Obviously I’m a dev. I didn’t want to have to take on another development task but writing some tools would allow Remote Desktop to work the way I want it to.
Anyway I’m going to try Remote Desktop with the 8800 as the secondary adapter. I’ll let you know how it goes. Otherwise I get to write some more code.
I think the problem is that Remote Desktop doesn’t provide access to the display driver on the remote machine. It’s a limitation of Remote Desktop, not of CUDA.
You should be able to use VNC or possibly SSH (both work under Linux).
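For anyone who wants to confirm it’s this issue and not a botched install: a minimal sketch (file name and messages are just mine) that queries the device count before doing anything else. Run it at the console and again inside a Remote Desktop session; in the RDP session it’s this first call that comes back with no devices.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Ask the runtime how many CUDA devices this session can see.
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        // This is the "no device" case the samples report over Remote Desktop.
        std::printf("No CUDA device visible: %s\n",
                    err != cudaSuccess ? cudaGetErrorString(err) : "count is 0");
        return 1;
    }
    std::printf("Found %d CUDA device(s)\n", count);
    return 0;
}
```

Compile it with nvcc (e.g. nvcc -o devcheck devcheck.cu) and compare the output from both kinds of session.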
Although the GoToMyPC website is lacking in technical details, it would appear they are essentially doing what VNC does (capturing screenshots and then sending them), hence why it works.
It looks like RemotelyAnywhere also does the same thing. Anything that just sends screenshots should work.
For those who still find or reference this thread, the new Chrome Remote Desktop extension works quite well. It’s faster than VNC, and whatever they’re using still allows the CUDA calls through.
This is the most annoying issue!!! I wish NVIDIA would support Windows Remote Desktop for GeForce!!! It’s been driving me nuts for 2 years now! How secure is Google Chrome Remote Desktop? I’m not sure I could risk using it to access my work machine :/
I specifically bought Windows 7 Professional for all my machines so I could still develop CUDA over Remote Desktop, only to find it doesn’t work!!!
I have 3 water-cooled GeForce 590s, so that driver won’t help :/
I’ve been looking for a water block to fit a Tesla, but to be honest I don’t need the extra memory the Teslas provide, I just need the cores!! I flashed one of the 590s to show up as two Tesla M2090s, but only one of them actually worked; I guess the internal bridge between the two devices (a 590 is essentially 2x 580) wasn’t correctly configured. Still, it proved that GeForce cards are perfectly capable of running in TCC mode, provided you have more than one device (Windows needs at least one device to use as the display adapter).
As far as I understand, Windows Remote Desktop completely bypasses the normal WDDM graphics driver; there is nothing NVIDIA’s driver can do about that. If using the TCC driver on Windows is not an option, I would suggest using VNC or similar tools.
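If you want to see which of your devices are actually running under TCC (and would therefore keep working in a Remote Desktop session), here’s a rough sketch that just reads the tccDriver field from each device’s properties; compile it with nvcc as usual.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices visible in this session\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // tccDriver is 1 when the device runs under the TCC driver
        // instead of the WDDM display path.
        std::printf("Device %d: %s, TCC driver: %s\n",
                    i, prop.name, prop.tccDriver ? "yes" : "no");
    }
    return 0;
}
```

On cards whose driver allows it, the driver model can be switched with nvidia-smi (nvidia-smi -dm TCC -i <device index>, reboot required), though as far as I know GeForce parts generally refuse.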
I have successfully used TeamViewer to work with WDDM devices running CUDA code. It’s a Windows app, but it has also been ported (albeit using Wine) to Linux. In my opinion it works slightly better in low bandwidth conditions compared to VNC.
Is working with Windows really a must? I use SSH and VNC (sometimes even SSH with X forwarding when running Nsight), and it works like a charm.
As was said before, this is a Windows issue, not an NVIDIA one.
I use Windows Remote Desktop to connect to Amazon’s HPC instances. I can run CUDA-based code there without problems, so it is technically possible to access GPUs through a Windows Remote Desktop connection. I have no idea what Amazon is doing differently.
I know of a cool hack to use CUDA over Remote Desktop. Basically, I found that if you start a CUDA application outside of Remote Desktop (at the physical console) and then take over the session through Remote Desktop, it still runs on the GPU!
This isn’t very useful by itself if I need to rebuild my program and test again. But maybe that process can create child processes that also have a GPU attached; then you could use a shell to launch whatever you want (a rough test sketch is below).
This phenomenon suggests there might be some secret API call that lets you attach a real, GPU-backed display to a Remote Desktop process.
Does anyone who’s programmed for WDDM know of such a function?
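If anyone wants to poke at the console-launch hack more systematically, here’s a rough experiment sketch (nothing official, just my test idea): compile it with nvcc, start it at the physical console, connect over Remote Desktop, and use the cmd.exe shell it opens to launch rebuilds from a process tree that already had GPU access. Whether the children really keep a working GPU is exactly the open question.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial kernel just to prove the GPU actually executes something.
__global__ void touch(int *out) { *out = 42; }

static bool gpuAlive() {
    int *d = nullptr;
    int h = 0;
    if (cudaMalloc(&d, sizeof(int)) != cudaSuccess) return false;
    touch<<<1, 1>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    return h == 42;
}

int main() {
    std::printf("GPU usable at startup: %s\n", gpuAlive() ? "yes" : "no");
    // Open a shell; anything launched from it is a child of this
    // console-started process. Blocks until the shell is closed.
    std::system("cmd.exe");
    std::printf("GPU still usable after the shell closed: %s\n",
                gpuAlive() ? "yes" : "no");
    return 0;
}
```

If the second check still says yes after you’ve taken over via RDP and run things from the shell, the hack holds up for child processes too.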