Used C1060s? Where to find?


If this post is not appropriate for this forum section then my apologies. I didn’t see an “everything else” forum section.

Has anyone found a place that has used C1060s for sale? eBay's useless for this, and I'll be paying for one out of pocket (not covered by work), so I'm trying to save some coin if I can. Also, do you think there will be a price drop on them when the Fermi line of cards starts to come out?


If you are paying out of pocket, are you sure a GTX 285 won’t do the trick? There are even 2GB models which are still a quarter of the price of a C1060, if you need more than the standard 1GB video memory on the 285.

Hello seibert,

That’s a very good suggestion. I checked the specs on that card and it also has 240 cores and can be bought for about $400 USD. In a nutshell, what is it about the C1060’s CUDA performance that makes it cost almost $800 USD more, if you don’t mind my asking? I found this thread on Tom’s Hardware forum that says the C1060’s higher price is strictly due to the advanced tech support you get with it, and that it isn’t any faster than the GTX, but I figured I’d ask here to see if that’s correct:

GTX vs C1060

Is it the 4GB of RAM on the C1060 vs. the 1 to 2GB on the GTX 285? I could live with having 1 or 2GB of RAM, and that’s a heck of a price savings.

That thread also implies that the C1060 may be based on the same GPU as the GTX anyway.


The C1060 is actually a bit slower than typical consumer GTX 285 cards - both the memory and shader clocks are more conservative on the Tesla. The GPU silicon itself is otherwise identical, but the consensus is that there is binning, and the Teslas are harvested for power dissipation to fit into stricter TDP envelopes for reliability reasons. For high-availability applications or TDP-sensitive environments (like compute clusters) the Tesla versions might make sense, but for less critical applications the GeForce versions are generally fine.

For what it is worth, we have a smallish commodity compute cluster with Geforce versions of the GT200b, and we have had no problems with them so far.

Yeah, you are pretty much paying for more memory and better quality assurance testing. The Tesla cards are designed for 24/7 usage in cluster situations (and designed for cluster-sized budgets) but it is the same GPU in both cards. I’ve been using GeForce cards for CUDA for a few years (not in 24/7 usage yet, but week long jobs) and only had 1 out of 10 cards fail on me after a year.

If you get a GeForce card for CUDA, the only thing to watch out for is the overclocked models. Under the constant load of a long CUDA job, the overclocked cards are more likely to overheat, and it really isn’t worth the 5-10% speed improvement.

With your cluster, was there any hardware resource contention (interrupts, etc.) or were there any driver conflicts you had to work around, due to having multiple cards with video display components in the same system? I’m asking because I’m assuming, possibly incorrectly, that you are using multiple video cards with GT200b GPUs to make your cluster, as opposed to the C1060, which does not have video display components on it. For example, I worry that multiple GTX 285 cards in an MS Windows box could cause the OS problems at the driver or hardware interrupt level.

Also if you don’t mind saying, how many cards are in your cluster, what wattage power supply are you using with it, and what O/S?



Thanks for the overclock warning. I saw a 2GB GTX 285 card on CompUSA that was overclocked. Hopefully I can find a 2GB card that isn’t, or perhaps I can turn off the overclocking?

Were you able to get warranty service on the failed card and any tips on getting a good price? I’m starting with one card, but I do want to scale up to my own cluster soon enough. Also, if you don’t mind saying, what is the max number of cards you have in any one system and what wattage power supply are you using?


I think you misunderstand what I mean by cluster. We don’t have multiple video cards in the same system - just a single GT200b in each node. 16 nodes in total, 8 with GPUs and 8 without, coupled by gigabit Ethernet and SDR InfiniBand.

We have 8 cards in 8 discrete nodes, each powered by its own 550W power supply. The whole thing runs CentOS 5.2 Linux, statelessly provisioned at boot using Perceus.

I haven’t needed 2GB of memory for anything, so I have only bought the standard cards. Usually I buy EVGA, and they have a reasonable warranty process. I’ve been traveling and haven’t had the time to deal with the GTX 280 yet, though. The RMA website for EVGA now requires that I call technical support before mailing a card back, even if it has clearly failed (random colored characters on boot), and I haven’t had a chance to put the card back into a computer so I can go through the tech support charade. I expect it will be fine, but I don’t have a finished success story to tell you yet.

As for good prices, I usually just go to Newegg and figure that’s good enough.

I have several systems, with varying numbers of cards. For a while I was running a GTX 295 and a GTX 260 on an 850W power supply in the same computer, and GTX 275 + GTX 285 on a 750W supply. I have a 4x GTX 295 system (so that’s 8 CUDA devices) running on a 1250W supply, but that is a very custom setup that I would not recommend trying unless you have a specific need. Normal computers cannot physically accept four double-slot graphics cards.

These were all Scientific Linux 4 and 5 computers (basically the same as RedHat Enterprise and Centos), so I have no experience running CUDA on Windows.
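For what it’s worth, a quick way to sanity-check what the runtime sees on any of these multi-card boxes is a small device-query program. A minimal sketch (the output format is just illustrative; a 4x GTX 295 system should report 8 devices):

```cuda
// Sketch: enumerate the CUDA devices visible to the runtime.
// Compile with: nvcc devquery.cu -o devquery
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("%d CUDA device(s) found\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %d multiprocessors, %.0f MB global memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
```

Each GPU on a dual-GPU card like the GTX 295 shows up as its own device, so this is also an easy way to confirm the driver is exposing everything you paid for.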

Thanks to both of you for all your input. I think I’m going to get an EVGA GTX 260 to start out and get my feet wet with CUDA. It has 216 cores and I can get one brand new for only $180, unless you think the GTX 285 has such a commanding performance edge that it’s worth paying twice the price of a GTX 260.

It’s astonishing how much computing power you can get for under $200.


The GTX 260 is a great device to learn CUDA with. It has all the same compute features as the GTX 285, with a modest reduction in memory bandwidth and floating-point performance. There’s no point in buying something faster until you have a specific application.
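If it helps for getting your feet wet: a typical first program that any GT200 card handles easily is an element-wise vector add. A rough sketch (sizes and launch configuration are arbitrary choices, nothing tuned):

```cuda
// Minimal first CUDA program: c[i] = a[i] + b[i].
// Compile with: nvcc vecadd.cu -o vecadd
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // 1M elements (arbitrary)
    size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    int threads = 256;                      // common block size
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[100] = %f (expect 300)\n", hc[100]);  // 100 + 2*100
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```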

For home usage you are probably better off with a GeForce 200 if you don’t need the extra memory. They are overclocked (my GTX 285 by about 30% compared to the Tesla/Quadro FX 5800), but that shouldn’t be such a problem with a workstation. It is possible to downclock them if you install the nView Desktop Manager under Windows; IIRC, over/underclocking is built into the Linux driver.

The big differences from the Tesla, apart from the memory, are that the Tesla is manufactured and supported by NVIDIA, whereas with the GeForce, NVIDIA only manufactures the GPU and doesn’t give support. Overclocking is a thermal issue (Teslas are designed for 24/7 cluster/work-floor environments, where cooling is a more severe issue and failure means lots of costs due to downtime).

I know people who put either four C1060s or three GTX 295s in one box. The three 295s give more computing power but are mostly suitable for a workstation setup and can be a challenge to cool. It did require new drivers to solve the thermal issues, though.

Also take note that if you put a Tesla in, you also need another NVIDIA card for the drivers to install (it can be an on-board chip).

As for a power supply, you need around 180-200W of overhead for each card. If you use one card, you can settle for a 550-600W power supply. If you want to allow for more cards, you will need more than that (which does bump up costs). For four Teslas or three 295s, I would personally go with a 1200W power supply.
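To make the arithmetic explicit, here is a back-of-envelope sketch. The ~200W-per-card figure is from above; the base-system draw and headroom factor are rough assumptions, not vendor numbers:

```cuda
// Back-of-envelope PSU sizing: base system + ~200W per card,
// plus ~20% headroom. Host-only code; all figures are estimates.
#include <stdio.h>

int recommended_psu_watts(int num_cards) {
    const int watts_per_card = 200;  // rough GT200-class budget (from thread)
    const int base_system    = 300;  // CPU, disks, fans, board (assumed)
    int load = base_system + num_cards * watts_per_card;
    return load + load / 5;          // ~20% headroom
}

int main(void) {
    for (int n = 1; n <= 4; ++n)
        printf("%d card(s): ~%d W supply\n", n, recommended_psu_watts(n));
    return 0;
}
```

With these assumptions, one card comes out right at the 600W end of the range above, and four cards land a bit above 1200W, which is why I'd call 1200W the floor rather than a comfortable figure for a four-card box.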