CUDA on Windows Server 2003

Is CUDA available for Windows Server 2003?

Are there any compatibility issues regarding CUDA 0.8.1 SDK for Windows XP installed on a system running Windows 2003 Server 32-bit edition?

Windows 2003 Server has not been tested or qualified for use with CUDA.

I’ve got CUDA running on 32-bit Windows Server 2003. It seems to work as it should.

HOWEVER: Just like with normal Windows XP, you won’t be able to access the card remotely, only when logged in as a local user.

I’d like to know if Nvidia has any plans to enable Remote Desktop access to the G80. If this cannot be solved by tweaking Windows, I guess Nvidia will have to provide a driver that presents the G80 not as a “graphics card” but as a dedicated compute card available to all types of users (one application at a time is OK).

I’d also like to know whether this issue is non-existent under Linux. Switching OS is definitely an option; we just put the card into a Win 2k3 server we already had running and hoped for the best…

I’m not sure I fully understand what you want to do, but on Linux the actual CUDA program has to run on the same computer that physically contains the GPU. It is very easy to operate the program remotely, since you can ssh to the Linux machine from another computer, but the program runs on the remote machine, not the client.

We use our card exclusively this way. We do have the X server started on the GPU machine, but it just sits at the gdm login screen; no user has to be logged in locally. Other forum posts have suggested that even starting X is not necessary if you force the nvidia kernel modules to load.

Thanks, that answers my question :) (i.e. “is it possible to use CUDA when logged in remotely on a Linux machine?”)

Another one:
What happens when several users are logged in at once - can the card(s) be shared between several users, and to what extent? Can only one process that links to the CUDA libraries run at any time, or can several programs allocate memory and execute kernels as they like? Or something in between…?

From what I’ve seen in the forum, it is more restrictive than that. Each card can only be accessed by a single thread. So even within an application, you have to select a thread (assuming you have more than one) to be the only thread to interact with a particular graphics card.

Since you can have multiple threads, it is easy for one application to use many GPUs in CUDA mode, but it is not possible (as far as I know) for many programs to use one GPU.
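To make the one-context-per-thread point concrete, here is a minimal sketch of the pattern described above: one host thread is spawned per GPU, and each thread calls cudaSetDevice before touching the card, so each device is only ever accessed by a single thread. All names (busywork, worker) are illustrative, not from the original posts, and error checking is omitted for brevity.

```cuda
// Sketch: one host thread per GPU, since a CUDA context is bound to
// the thread that created it (true at least through the CUDA 1.x era).
#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

__global__ void busywork(float *out) {
    out[threadIdx.x] = threadIdx.x * 2.0f;  // trivial placeholder kernel
}

static void *worker(void *arg) {
    int dev = *(int *)arg;
    cudaSetDevice(dev);                       // bind this thread to its own GPU
    float *d_out;
    cudaMalloc((void **)&d_out, 32 * sizeof(float));
    busywork<<<1, 32>>>(d_out);
    cudaThreadSynchronize();                  // pre-CUDA-4.0 spelling of cudaDeviceSynchronize
    cudaFree(d_out);
    return NULL;
}

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count > 8) count = 8;                 // cap at the arrays below
    pthread_t threads[8];
    int ids[8];
    for (int i = 0; i < count; ++i) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < count; ++i)
        pthread_join(threads[i], NULL);
    printf("ran on %d device(s)\n", count);
    return 0;
}
```

The key design point is that the threads never share a device: trying to call into the same GPU from two threads (or two processes, for kernel launches) serializes or fails, as the following posts describe.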

As far as I have experienced, the device cannot be used by multiple users simultaneously. This is basically because you cannot run multiple kernels at the same time.

What seibert said is also true, but since the resources are local to each process, several users can be using CUDA at the same time - they just cannot run kernels at the same time.

You can check this easily by running CUDA on the device the X server is on: during kernel execution X freezes, and other requests to run a kernel simply block, but other CUDA operations proceed without interruption. So if you can schedule the resource allocations and kernel executions through a daemon, it might well work for you.

Peter