At work, we’re spec’ing out a new cluster that will also double as a GPU computing testbed. Ideally, we want to take one nice GPU card (Tesla, GTX 295, etc.) and put it in each node. I have met some resistance from the IT guys as to whether this can be done. Any opinions? At the least, some ATX-mobo-compatible, rackmountable case would make this a moot point. Of course, I’d really like some S1070’s, but that’s not gonna happen. So any past experience with something similar, thoughts on hardware to make it work, or other suggestions would be appreciated.
For sure this can happen. You may not be able to stuff a double-wide GTX 295 into a 1U rackmount case, but the Tesla S1070 definitely works with 1U hosts, since it sits in its own rack slot and connects to the host over PCI-Express cabling.
Beyond physical rack space constraints, power supply capacity may need figuring out, but it can be done. Wish I had a concrete success story for you … not yet.
Your conclusions seem to match up with other reports in the forum. If you want to buy off-the-shelf 1U or 2U compute nodes, you pretty much have to use the Tesla S1070 for GPUs. Compact servers are not designed for the cooling, power, and volume requirements of a double-slot 200+ watt PCI-Express device.
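To make the power constraint concrete, here is a rough back-of-the-envelope sketch of a per-node power budget for a single-GPU build. All wattages below are illustrative assumptions (typical TDP-class figures for this era of hardware), not measured values for any specific card or board:

```python
# Rough per-node PSU sizing sketch. Wattages are assumed, not measured.

def psu_recommendation(component_watts, headroom=0.30):
    """Sum the component draw, add ~30% headroom, and round the
    suggested PSU size up to the next 50 W increment."""
    total = sum(component_watts.values())
    sized = total * (1 + headroom)
    return total, int(-(-sized // 50) * 50)  # ceil to a 50 W step

node = {
    "cpu": 130,             # assumed quad-core TDP
    "gpu": 200,             # assumed 200+ W GPU, per the figure above
    "board_ram_disks": 100, # assumed motherboard, memory, and storage
}

total, psu = psu_recommendation(node)
print(total, psu)  # 430 W draw -> suggest a 600 W supply
```

Numbers like these are exactly why off-the-shelf compact servers fall short: a stock 1U PSU rarely has that much headroom on the 12 V rails, and the GPU also needs auxiliary PCI-Express power connectors that many server supplies simply don’t have.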
If you plan to build rackmount nodes with cases that let you use standard computer parts, then the task is quite easy. It’s just a workstation in a funny-shaped case. The parts suggested here:
High Performance Supercomputing | NVIDIA Data Center GPUs (the original link now resolves to “Page Not Found | NVIDIA”)
should give you some ideas of what to buy, although that page describes how to build a multi-GPU system. For a multi-GPU system, you could definitely scale back the PSU.
Update: Obviously, I mean for a single GPU system, you could scale back the PSU. :)
:D well put sir