CUDA-capable card in SUN-AMD servers?

hi all,

I'm taking my first steps into CUDA programming, and I want to know whether I can install a CUDA-capable card in a (second-hand) Sun X2100 server.

I found this page: http://www.nvidia.com/object/tesla_compatible_platforms.html, but it lists the X2200 M2 as the closest server, and the table is titled “Systems for use with Tesla S1070”.
Trying to understand nVidia's product naming, I see that the S1070 is a server, not a card, and that it contains T10 cards…

Digging further on the Internet, I cannot find any specifications for that T10 card. The best I could find is that T10 cards are second-generation cards introduced in 2008, built on something called the GT200 (apparently their codename for the architecture used), and that they have 240 cores (please correct me if that's wrong).

The website of nVidia lists only servers and their specs, but not individual cards.

Also, I see them announcing their “Fermi” cards, which are supposed to be a lot faster and more energy-efficient than the older cards. Of course they will also be tremendously more expensive.

So my question remains unanswered: which type of card can I install in a Sun X2100 server (1U), given that I will have to run some flavor of Linux on it to get CUDA support, and that I am not funded by some rich institute?

If anyone can guide me in the right direction, it would be very welcome.

Rob

The Tesla card model numbers are the C1060 (which uses the GT200-series GPU, shared with the GTX 275/285/295), and the C2050 and C2070, which use the “Fermi” (or “GF100”) series GPU, shared with the GTX 470 and GTX 480.

If you are not well funded and are taking your first steps into CUDA, I would stay away from Tesla entirely. You can get started programming CUDA with one of the GeForce cards instead. (And aside from double precision performance in the Fermi series, some GeForce cards can actually be slightly faster than their Tesla equivalents.)
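Once any CUDA-capable GeForce is in the machine, a minimal device-query program (essentially what the SDK's deviceQuery sample does) will confirm the driver sees the card and report its capabilities. A sketch, assuming the CUDA toolkit is installed and compiling with nvcc:

```cuda
// Minimal device query: compile with `nvcc devquery.cu -o devquery`.
// Assumes the CUDA toolkit and a working driver are installed.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found (%s)\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        printf("  Global memory:      %zu MB\n", prop.totalGlobalMem >> 20);
        printf("  Clock rate:         %.2f GHz\n", prop.clockRate / 1e6);
    }
    return 0;
}
```

If this runs and prints your card, the toolchain and driver are set up correctly and you can start writing kernels.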

Now, you mention that this is a 1U server, which unfortunately means that most of the high-end CUDA cards probably won't fit. Those cards are full length, two PCI slots wide, and require multiple PCI-Express power connectors not found in most servers (in my experience). To use a high-end GPU, a 1U server would almost certainly need to be connected to an S1070, which is four C1060s (or a variant of them) in a 1U enclosure that provides power and cooling, and that communicates with the host computer over external cables leading to one or two small PCI-Express interface cards that fit easily in 1U servers.

However, you might be able to fit a mid-range CUDA device into your computer, although I have no experience with the server you mention. The GT240 is not a bad little card: 96 CUDA cores @ 1.34 GHz and up to 54 GB/sec memory bandwidth on the model with GDDR5 memory. That's less than half the speed of a C1060, but it's only $100 and requires only one 6-pin PCI-Express power connector.

(Edit: I should also note the GT240 does not do double precision floating point math. If that is a requirement, then you basically have no options for your 1U server, aside from the S1070.)
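(You can also check for double precision at runtime: it requires compute capability 1.3 or higher, i.e. GT200-class chips or Fermi. A small sketch along those lines:

```cuda
// Double precision support begins at compute capability 1.3
// (GT200-class chips and later). The GT240 is compute capability 1.2,
// so this check would report NOT supported on it.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device found\n");
        return 1;
    }
    bool has_double = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("%s: double precision %s\n", prop.name,
           has_double ? "supported" : "NOT supported");
    return has_double ? 0 : 1;
}
```

That way a program can fall back to single precision instead of silently producing wrong results on older cards.)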

So barring anyone posting direct success stories with your particular server, you might have to do some research:

  • Is there a 6-pin PCI-Express power cable available on the power supply? If not, you are pretty much out of luck since no CUDA device worth using on a server can run purely on the PCI-Express slot power.

  • How large are the PCI-Express slots? Are they low-profile? There are low profile GeForce cards even slower than the GT240, but again, I’m not sure they would be worth using.

  • You will have to run Linux on this computer. CUDA is only supported on Mac/Win/Linux, and there is Linux support for both 32-bit and 64-bit x86 distributions. The CUDA toolchain can be cranky about the wrong version of gcc and some of its compiler headers, so if you have a choice, use one of the Linux distributions recommended for the CUDA version you download. (For CUDA 3.0, that's SUSE 11, RHEL 4.8 and 5.3, Fedora 10 and Ubuntu 9.04. The CUDA 3.1 release, in beta for registered developers now, moves to SUSE 11, RHEL 4.8 and 5.4, Fedora 12 and Ubuntu 9.10.)

To be honest, I expect that the power or space limitation will preclude installing any CUDA device in your server. I haven't seen any 1U servers designed for full-sized, moderate-to-high-power-draw PCI-Express cards. In that case, I'd look for a spare mid-tower computer and consider either the GT240 (w/ GDDR5) for ~$100, or perhaps the newly released Fermi-based GTX 465 for ~$280, depending on your budget. (I would have suggested one of the GTX 200-series cards, but they seem to have evaporated from the supply channels now that the GTX 400 series is shipping in quantity.)

It appears that the X2100 only has PCIe x8 slots (although the specs are ambiguous). A PCIe x16 slot is needed for a CUDA-compatible GPU. This is not a matter of Gen 1.0 vs. 2.0 or the number of electrical lanes, but of the physical size of the slot.

Actually, the GT210/220/240 series do not require supplemental power. A single-slot GT240 MIGHT fit in some 1U servers, but these cards tend to run hot and need good airflow to keep from burning out.

Ah, it looks like you are right. I must have been fooled by a side-view photo; the NVIDIA spec says the maximum power draw for a GT 240 is 69W, which slot power alone can supply.

You can probably connect a Tesla S2050/S2070 using the PCI-e x8 HICs, but someone will have to check whether the x8 HIC fits into the slot and whether the host is compatible with the S2050.

See http://www.nvidia.com/docs/IO/43395/SP-04975-001-v04.pdf for info on the HICs and the S2050 system.

You should contact the NVIDIA sales center: tesla@nsc-nvidia.com