We are going to set up a GPU server with two quad-core Xeon CPUs and four Tesla C2050 GPUs for the students of a graduate course on CUDA programming. Several students should be able to use the server simultaneously to work on their CUDA programming assignments. We have two options for the OS: Linux or Windows Server 2008 Enterprise Edition.
Here I have two questions:
1- Which of the two operating systems is better suited for this purpose, and why?
2- Is it possible for all students to log in to the server simultaneously (via SSH on Linux or Remote Desktop on Windows) and run their programming assignments at the same time? I wonder what happens in this scenario and how the four GPUs will be assigned to the users. As far as I know, each user will be able to see and use all four GPUs, and on Fermi-based Teslas such as the C2050 up to 16 kernels can execute concurrently. Am I right?
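To illustrate what I mean by "assigning" GPUs: my current (possibly wrong) understanding is that each user's process enumerates the devices itself and picks one with the runtime API. A minimal sketch of that, with a placeholder selection policy I made up for illustration:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);   // I expect all 4 C2050s to show up here
    printf("Visible CUDA devices: %d\n", count);

    // Hypothetical policy: each student's process picks one device,
    // e.g. by hashing a user ID modulo the device count. The "0" here
    // is just a placeholder for that per-user value.
    int myDevice = 0 % count;
    cudaSetDevice(myDevice);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, myDevice);
    printf("Using device %d: %s\n", myDevice, prop.name);
    return 0;
}
```

What I don't know is what happens when two users' processes both call cudaSetDevice on the same GPU at the same time, which is really the heart of my question.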
I have only worked with CUDA on a single-user Windows machine with a single GPU, so I have no idea what exactly happens on a multi-user machine in terms of GPU resource sharing.
Any advice you could give me would be greatly appreciated.
Thank you all for your help.