The Tesla card models are the C1060 (which shares the GT200-series GPU with the GTX 275/285/295) and the C2050 and C2070, which use the “Fermi” (or “GF100”) series GPU found in the GTX 470 and GTX 480.
If you are not well funded and taking your first steps into CUDA, I would stay away from Tesla entirely. You can get started programming CUDA with one of the GeForce cards instead. (And aside from double precision performance in the Fermi series of cards, some GeForce cards can actually be slightly faster than the Tesla equivalent.)
Now, you mention that this is a 1U server, which unfortunately means that nearly all of the high-end CUDA cards probably won’t fit. Those cards are full length, two PCI slots wide, and require multiple PCI-Express power connectors not found in most servers (in my experience). To use a high-end GPU, a 1U server would almost certainly need to be connected to an S1070: four C1060s (or a variant of them) inside a 1U enclosure that provides their power and cooling, and communicates with the host computer over external cables leading to one or two small PCI-Express interface cards that easily fit in 1U servers.
However, you might be able to fit a mid-range CUDA device into your computer, although I have no experience with the server you mention. The GT240 is not a bad little card: 96 CUDA cores @ 1.34 GHz and up to 54 GB/sec of memory bandwidth on the model with GDDR5 memory. That’s less than half the speed of a C1060, but it’s only $100 and requires only one 6-pin PCI-Express power connector.
(Edit: I should also note the GT240 does not do double precision floating point math. If that is a requirement, then you basically have no options for your 1U server, aside from the S1070.)
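If you aren’t sure whether a given card does double precision, you can check its compute capability at runtime (double precision support starts at compute capability 1.3, i.e. the GT200 series). Here’s a rough sketch using the CUDA runtime API — compile it with nvcc:

```cuda
// Sketch: list each CUDA device and report whether it supports
// double precision (compute capability 1.3 or higher).
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Double precision arrived with compute capability 1.3 (GT200).
        bool hasDouble = (prop.major > 1) ||
                         (prop.major == 1 && prop.minor >= 3);
        printf("Device %d: %s (compute %d.%d), double precision: %s\n",
               i, prop.name, prop.major, prop.minor,
               hasDouble ? "yes" : "no");
    }
    return 0;
}
```

The deviceQuery sample that ships with the CUDA SDK prints much the same information, if you’d rather not write it yourself.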
So barring anyone posting direct success stories with your particular server, you might have to do some research:
- Is there a 6-pin PCI-Express power cable available on the power supply? If not, you are pretty much out of luck since no CUDA device worth using on a server can run purely on the PCI-Express slot power.
- How large are the PCI-Express slots? Are they low-profile? There are low-profile GeForce cards even slower than the GT240, but again, I’m not sure they would be worth using.
- You will have to run Linux on this computer. CUDA is only supported on Mac/Win/Linux, and there is Linux support for both 32-bit and 64-bit x86 distributions. The CUDA compiler headers can be kind of cranky with the wrong version of gcc, so if you have a choice, you should use one of the Linux distributions recommended for the CUDA version you download. (For CUDA 3.0, that’s SUSE 11, RHEL 4.8 and 5.3, Fedora 10 and Ubuntu 9.04. The CUDA 3.1 release, in beta for registered developers now, moves to SUSE 11, RHEL 4.8 and 5.4, Fedora 12 and Ubuntu 9.10.)
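Once you have a supported distribution and the toolkit installed, a quick way to confirm the whole toolchain works end-to-end (nvcc, driver, and card) is to compile and run a trivial kernel. A minimal sketch:

```cuda
// Minimal toolchain sanity check: launch a trivial kernel and
// verify the result on the host. Compile with: nvcc check.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int *x) { *x += 1; }

int main(void)
{
    int *d = NULL, h = 41;
    cudaMalloc((void **)&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);
    addOne<<<1, 1>>>(d);          // run on the GPU
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("%s\n", (h == 42) ? "CUDA toolchain looks OK"
                             : "Something is wrong");
    return (h == 42) ? 0 : 1;
}
```

If this fails to compile, suspect your gcc/distribution combination; if it compiles but the kernel doesn’t run, suspect the driver install.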
To be honest, I expect that the power or space limitation will preclude installing any CUDA device in your server. I haven’t seen any 1U servers designed for full-sized, moderate- to high-power-draw PCI-Express cards. In that case, I’d look for a spare mid-tower computer and consider either the GT240 (w/ GDDR5) for ~$100, or perhaps the newly released Fermi-based GTX 465 for ~$280, depending on your budget. (I would have suggested one of the GTX 200-series cards, but they seem to have evaporated from the supply channels now that the GTX 400 series is shipping in quantity.)