what are the advantages and disadvantages of using four C1060 vs. one S1070?
I know that with the S1070-500 series one gets a 10% increase in GPU frequency.
Other than that, is there any advantage of using one S1070?
Are the 16 GB there shared between the 4 GPUs, so that once the data is in the GPU memory then
there is no need for it to be transferred from card to card, as would be the case if using four C1060s?
The fact that the power supply and cooling are guaranteed to work. Feeding and cooling four C1060s is no easy task. S1070s are nicely packaged for building clusters, too.
Nope, an S1070 still shows up as 4 independent GPUs.
Two PCIe gen2 x16 connectors. Each connects to two of the GPUs in the S1070.
I believe it has to go through the host (at least currently)
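To illustrate what "through the host" means in practice, here is a minimal sketch (not code from the thread, and the buffer size is arbitrary): since the four GPUs are independent devices with no direct card-to-card path on this generation, moving data from one GPU to another means a device-to-host copy followed by a host-to-device copy, crossing PCIe twice.

```cuda
// Sketch: staging a buffer from GPU 0 to GPU 1 through host memory.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t n = 1 << 20;                 // 1M floats (example size)
    float *h_buf = (float *)malloc(n * sizeof(float));
    float *d_src, *d_dst;

    cudaSetDevice(0);                         // source GPU
    cudaMalloc(&d_src, n * sizeof(float));
    // ... kernels fill d_src on device 0 ...

    // First PCIe crossing: device 0 -> host
    cudaMemcpy(h_buf, d_src, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaSetDevice(1);                         // destination GPU
    cudaMalloc(&d_dst, n * sizeof(float));

    // Second PCIe crossing: host -> device 1
    cudaMemcpy(d_dst, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);

    free(h_buf);
    return 0;
}
```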
there are also x8 (physical) connectors available. If a physical x16 slot is only x8 electrically, that should still work, as far as I understand.
There are small racks. I don't know if vertical mounting is an option; blade servers are often mounted vertically, but they sit in a special enclosure. You might want to do a Google search. Last time I was looking, I found small 4U to 8U racks on wheels, military grade.
There are two PDFs on the S1070 page on NVIDIA's website that provide a lot of detail, as far as I remember also about the PCIe host cards available for the S1070.
There might be a lot of noise from the case/PSU fans in addition to the C1060 fans. We recently purchased a 2U HP server whose case fans are so loud at boot that it sounds like a jet aircraft is nearby. (Once the BIOS initializes, the temperature control kicks in and throttles the fans down to a dull roar.) When I first powered it on, everyone within 50 feet of my office came over to see what the racket was. It was a lesson in just how little rackmount case makers care about noise. :)
Four C1060s will probably be pretty loud compared to a normal workstation. I can definitely hear the two GT200 cards in the workstation next to my desk. But they are nothing like the rackmount servers I maintain. (Granted, no S1070 in that list. It would be worth pressing NVIDIA to replace “TBD” with an actual noise measurement.)
In every HPC server room I've ever toured, the running air-conditioning equipment was much louder than the thousands of nodes spinning their fans :)
My dual quad core Mac Pro is virtually silent, so this is almost certainly true :)
I guess the question really is would it be loud enough to disturb people working near it. And related to this question is whether it would involuntarily serve as a foot warmer in winter.
I'm assuming the S1070 is x16 gen 2 for all 4 slots, but each pair of GPUs shares the connection back to the host via a single x16 link. It's not clear how that impacts the bandwidth. Does that reduce it by half?
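One way to answer this empirically (a sketch under assumptions, not from the thread; the 64 MB buffer size is arbitrary) is to time a pinned host-to-device copy with CUDA events on one GPU at a time, then on both GPUs of a pair concurrently, and see whether the per-GPU figure drops when the shared link is contended:

```cuda
// Sketch: measuring effective host->device bandwidth on the current device.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t bytes = 64 << 20;            // 64 MB test buffer
    float *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);            // pinned host memory
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time in milliseconds
    printf("Host->device: %.2f GB/s\n",
           (bytes / 1.0e9) / (ms / 1000.0));

    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}
```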
There is yet another consideration: the 4x Tesla C1060 systems I see described generally do not have more than 16 GB of system RAM, and some of that memory must be dedicated to the OS itself, so it's not possible to have a one-to-one correspondence between host RAM and GPU memory, which for some reason seems to be recommended. Presumably, a host system connecting to an S1070 would be a rackmount and could therefore be configured with enough RAM to achieve that.
No, even if you run it at peak load for a few days. It's surprisingly not that loud (and I'm comparing it to similar, very quiet machines); there's noise, but it's nothing ridiculous.
Depends which way the fan vents blow.
I’ve had a 4xC1060 system sitting on a desk five feet behind me (next to two other people) for a month or two now as a testbed, and the noise is not bad at all. Inaudible in a normal office environment, and the frequency of the noise is such that even though it makes noise it’s not annoying.
On the same web page I referred to earlier, it says:
Velocity Micro is offering a system with this board and four cards: http://www.velocitymicro.com/wizard.php?iid=174. And they offer it with up to 32 GB RAM! How do they do it? Well, it turns out they remove the fans/shroud from two of the cards and replace them with their own liquid-cooling system. I also learned they are using custom-made 8 GB DIMMs. The system is 17.72 inches wide, which implies it could be mounted sideways in a rack. It's very expensive, but admittedly impressive. B)
Is a C1060 slower than a single GPU of an S1070?
Though we use Windows XP, our problem may be relevant to this topic.
We have two HPC systems: one has two desktops connected to one S1070, and the other is two desktops each with two C1060 cards. We ran the same benchmark program using ONE GPU on both systems and observed that the C1060 is about 10% slower than the S1070, though we expected the performance to be the same.
The desktops have the same CPUs and other configurations, except the GPUs. The graphics card information is the same:
Driver version: 197.03
CUDA Cores: 240
Graphics clock: 610 MHz
Processor clock: 1296 MHz
Memory clock: 800 MHz (1600 MHz data rate)
Memory interface: 512-bit
Memory: 4096 MB
Bus: PCI Express x16 Gen2
The only difference I can see is Video BIOS version, C1060 has 62.00.62.00.07 and S1070 has 62.00.62.00.09.
Has anybody had the same experience, or could someone give us an idea why the C1060 is slower?
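One thing worth checking (a sketch under assumptions, not a confirmed diagnosis): the S1070-500 variant mentioned earlier in the thread runs its GPUs at a higher shader clock than the C1060, which by itself would account for roughly a 10% difference on compute-bound code. Rather than trusting the driver control panel, you could query the clock each board actually reports through the runtime:

```cuda
// Sketch: print the shader clock reported by each CUDA device.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // clockRate is reported in kHz
        printf("Device %d: %s, clock %.0f MHz\n",
               i, prop.name, prop.clockRate / 1000.0);
    }
    return 0;
}
```

If both systems report the same clock, the difference would have to come from elsewhere (e.g. the host side or the Video BIOS revision you noted).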