Recommended NVIDIA Card?

I have Dell Precision 670 and Dell PowerEdge 2850 Server.

What would be the best card today which can fit in one of these machines?

I need it for development only.

Don’t know what fits in the machine, but
Value for money: GTX 260 core 216
More power, still good value, for slightly more money: GTX 275
Absolute performance, single-GPU: GTX 285
And perhaps, for multi-GPU programming, two devices on a single card: GTX 295. I’m hesitant to recommend this with regard to value and availability, but I don’t know; maybe someone will chime in.

I’d take the 275 probably.

All these cards are dual-slot; check to see what fits, or open the box.

I did some research, and the results show that the limiting factor is power consumption. The reality is that you cannot just pick a card and plug it into the computer. Of all the new CUDA-compatible models, I found only two that could fit in either of my systems:

Quadro FX 5800 - 189 W
Quadro FX 580 - 40 W

Here is the power consumption of some of the other cards:

GeForce GTX 295 - 289 W
GeForce 9800 GX2 - 197 W
Tesla C1060 Computing Processor - 187.8 W
Quadro FX 3800 - 108 W

For any of these I would need to replace my power supply. The Tesla, for example, would require at least 1,200 W for the whole system, and such a power supply costs around $300. It is true the Tesla has four times as many processors, but it also draws four times the power.

To justify using the GPU at all, we need to feed its memory with enough data and perform enough calculations to amortize the cost of transferring the data back and forth between CPU and GPU.
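That amortization argument can be made concrete with a back-of-the-envelope model. This is only a sketch with made-up throughput numbers (the CPU/GPU GFLOPS and PCIe rate below are illustrative assumptions, not measurements): offloading pays off only when the compute time saved exceeds the transfer time added.

```python
# Back-of-the-envelope model: when does offloading to the GPU pay off?
# All throughput numbers are illustrative assumptions, not measurements.

def offload_worthwhile(data_bytes, flops, cpu_gflops, gpu_gflops, pcie_gbps):
    """Return True if GPU time (transfer both ways + compute) beats CPU time."""
    transfer_s = 2 * data_bytes / (pcie_gbps * 1e9)   # to device and back
    gpu_s = transfer_s + flops / (gpu_gflops * 1e9)
    cpu_s = flops / (cpu_gflops * 1e9)
    return gpu_s < cpu_s

# 100 MB of data, ~1 flop per byte: transfers dominate, the CPU wins.
light = offload_worthwhile(100e6, 1e8, cpu_gflops=10, gpu_gflops=300, pcie_gbps=3.5)
# Same data, 10,000x more arithmetic per byte: compute dominates, the GPU wins.
heavy = offload_worthwhile(100e6, 1e12, cpu_gflops=10, gpu_gflops=300, pcie_gbps=3.5)
print(light, heavy)
```

The crossover point shifts with the PCIe rate, which is exactly why the bus generation matters so much in this discussion.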

Now, the best transfer rate for PCIe 2.0 is 16 GB/s; for PCIe 1.0 it is 8 GB/s. Most systems today, including mine, use 1.0. I cannot just add a PCIe 2.0 card: it needs a new system board, meaning a new computer. A new computer meeting all these requirements comes to around $2,000. If I add the Tesla, that is another $1,200. Even if I use the developers’ special, I don’t believe I can justify a $2,700-$3,200 system only for testing and development. Even in a production environment, the CPU-GPU memory transfer will always be the limiting factor. Until this bottleneck improves, I will stick to mid-range professional cards for now.

The bottom line: in order to use any of the NVIDIA cards effectively, you will need a brand-new high-end computer too.

I will get the Quadro FX 5800 for now, but because I cannot simply replace PCIe 1.0 with PCIe 2.0, the CPU-GPU memory transfer speed will be half the maximum possible.

The GTX 260 only uses 182 W, so if you can run an FX 5800, you should be able to run that. (And it is a lot cheaper.)

I’m confused which Tesla product you are talking about here. The C1060 is a single card which draws a maximum of 188W and has 240 floating point units, just like the GTX 275 and GTX 285. Are you talking about putting 4 of these cards into one computer? In that case, the $300 power supply is the least of your budget concerns, as the cards alone will exceed that price by a factor of 10. Building such a beast is difficult and definitely can’t be done with a standard computer from Dell.

If you are talking about the S1070, that puts the 4 cards into an external 1U rackmount enclosure with its own power supply and fan. Then data cables run into your adjacent server, which does not supply any power to the CUDA devices themselves.

You have a factor of 2 off here. The maximum theoretical bandwidth for PCI-Express 2.0 is 8 GB/sec, and in practice you can see between 5 to 6.8 GB/sec depending on your motherboard. For PCI-e 1.0, the practical max is around 3.5 GB/sec. PCI-Express 1.0 and 2.0 are also cross compatible, so you can combine motherboards and cards implementing different versions, and they will autonegotiate down to the fastest standard both sides can understand. (This is also how PCI-e 3.0 will work when it comes out next year.) I have used both PCI-e 2.0 cards in a 1.0 motherboard and 1.0 cards in a 2.0 motherboard. In both cases, they automatically downgraded to 1.0 speeds.
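To put those practical rates in perspective, here is a small sketch of how long a one-way transfer takes at the rough practical maxima quoted above (the rates come from this post; the buffer sizes are arbitrary examples):

```python
# One-way transfer time at rough practical PCIe maxima (GB/s, from the post above).
PRACTICAL_GBPS = {"pcie1": 3.5, "pcie2": 6.0}

def transfer_ms(nbytes, gen):
    """One-way transfer time in milliseconds at the practical rate for `gen`."""
    return nbytes / (PRACTICAL_GBPS[gen] * 1e9) * 1e3

for mb in (64, 256, 1024):
    n = mb * 1024**2
    print(f"{mb:5d} MB: PCIe 1.0 {transfer_ms(n, 'pcie1'):6.1f} ms, "
          f"PCIe 2.0 {transfer_ms(n, 'pcie2'):6.1f} ms")
```

Even at PCIe 1.0 rates, a 64 MB buffer crosses the bus in under 20 ms, which is why the bus only becomes the bottleneck when transfers are frequent relative to the computation.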

I am referring to the C1060.

The part number of the power supply in the Dell Precision 670 is X1463: 650 W. The power supply in this system doesn’t have the two 6-pin connectors required to power the card, and it doesn’t have an 8-pin connector at all.

~700 W (system) + ~200 W (card) = ~900 W is what I need. Better more: Dell recommended 1,200 W.
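That headroom arithmetic can be written as a simple budget check. The wattages below are the thread’s rough estimates (which other posters dispute), not datasheet values:

```python
# Rough PSU sizing check using the thread's estimates, not datasheet values.
def psu_ok(psu_watts, system_watts, card_watts, headroom=0.8):
    """True if system + card load stays within `headroom` fraction of PSU capacity."""
    return system_watts + card_watts <= psu_watts * headroom

# Stock Dell 670 PSU (650 W) with a C1060 (~188 W) and a ~700 W system estimate:
print(psu_ok(650, 700, 188))    # over budget with these numbers
# A 1,200 W PSU with the same load:
print(psu_ok(1200, 700, 188))
```

The 80% headroom factor is a common conservative rule of thumb for sustained loads, not a Dell or NVIDIA requirement.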

I have an AMD Phenom II 940 and a GTX 260 (216 SP), and it runs on a 450 W PSU. There is no way you need a 1,200 W PSU for one C1060 in a desktop.
My suggestion: just get the molex-to-6-pin adapters and use the power supply you have.

Fair enough, though that seems like quite a lot of power for a single Tesla card. I regularly run pairs of 200 W cards in computers (but not Dells) with 750-850 W power supplies. (I’m surprised the C1060 uses 6- and 8-pin connectors when its max power consumption is 188 W. That much power could be delivered with two 6-pin connectors plus the power from the card slot.)

That’s because with the C1060 you can use one eight-pin or two 6-pins, whichever is more convenient.

The problem is that, according to Dell, the power supply doesn’t have the two 6-pin connectors, doesn’t have an 8-pin at all, and has only one 6-pin power connector.

That means I have to replace the power supply with a more appropriate one. Another problem is that Dell cannot recommend a new one, because they recommend only tested solutions; they have not tested a power supply upgrade and thus cannot recommend one. This system cost about $5,000 five years ago, and I cannot just replace the power supply without Dell confirming which one will be OK.

I have a Dell server too, a PowerEdge 2850. It has only PCI-X and the same bandwidth problem if I ever use a reverse PCIe-to-PCI-X bridge; the bandwidth would be even less.

The Tesla C1060 is about $1,200. Because the Dell Precision 670 has PCIe 1.0, and I cannot move to PCIe 2.0 without replacing the motherboard too, I cannot utilize the full power of the Tesla C1060. That means I would have to buy a new high-end computer for about $2,000. Together with the Tesla C1060’s price, that comes to ~$3,000 minimum, which is not justified just for development and testing.

The bottom line is that I have a powerful server and a powerful desktop, and I still cannot utilize the Tesla C1060 fully without investing another $2,000 in a new system. If you add the much higher power consumption of a computer sitting idle most of the time, it is even less justified.

A basic Core 2 Duo or Core 2 Quad machine with plenty of memory can use a C1060 without a problem. You can get a machine like that for much, much less than $2,000; try $900.

Ahhh! Nice…

Dell is just being silly. Just replace the power supply, the machine probably isn’t under warranty anyway.

You learn something new every day. Dell is overselling; Amex (an authorized Tesla reseller) is overselling too ($2,000).

Obviously it is better to do most things myself, as usual.

Could you please share what you have found out so far?

Have you tried using 2x4-pin-to-6-pin converters to power the 5800?

I am in a similar situation with a Dell 670 and an FX 5800 card.

Appreciate your help.


I would be glad to help.

I checked your card specifications. The important parameter is: Maximum Power Consumption=189W.

I wouldn’t risk installing such a powerful card in the Dell 670. While Dell recommended replacing the power supply with a more powerful one, I didn’t want to take the risk, because this is my primary development machine, and a power supply malfunction can ruin the computer. Some forum members suggested not to worry, but I don’t think that is wise.

Initially I was very frustrated that I couldn’t use the developer’s discount for the Tesla C1060. Five years ago this workstation cost a lot, and now it turns out it is not compatible.

Then I looked at other options, for example the mid-range professional cards. There are a lot of parameters to compare, so I created a spreadsheet of the main CUDA-enabled cards: their capabilities, what they cost today, the minimum computer requirements, number of processors, etc. One important calculation is the ROI: computing power, memory, and other capabilities versus the money required for the card, plus additional costs such as possibly a new computer.

I will publish it soon, because it can actually help find the best card; with so many parameters it is not an easy task. I posted it here on the forum thinking it would be interesting for the community, but got no response, so I will publish it on my blog.
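The core of such an ROI comparison can be sketched as a tiny calculation: performance per dollar, where the dollar figure includes any extra system cost a card forces (a new PSU, a new computer). The card names, GFLOPS figures, and prices below are placeholders, not the spreadsheet’s actual data:

```python
# GFLOPS-per-dollar sketch; specs and prices are placeholders, not real data.
cards = [
    # (name, gflops, card_price_usd, extra_system_cost_usd)
    ("card_a", 900, 250, 0),
    ("card_b", 930, 1200, 300),   # similar speed but needs a PSU upgrade too
]

def roi(gflops, card_price, extra_cost):
    """Performance per total dollar spent, including forced system upgrades."""
    return gflops / (card_price + extra_cost)

ranked = sorted(cards, key=lambda c: roi(*c[1:]), reverse=True)
for name, g, p, x in ranked:
    print(f"{name}: {roi(g, p, x):.2f} GFLOPS/$")
```

The point of including the extra system cost is exactly the one made in this thread: a faster card that forces a new computer can easily lose to a slower card that drops into the machine you already own.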

Then I chose the FX 1800, a mid-range professional card. The main criterion was power consumption, with enough memory a second one. It has plenty of processors, so that does not matter much; it consumes only 59 W, and I had no need to upgrade the power supply.

Now it gets more interesting. After I installed the card in the Dell 670, I performed several tests to see how fast the PCIe transfer to the card actually is. It turned out the PCIe interface on this motherboard works at half speed: 2.5-3 GB/s for a one-way host-to-device pinned-memory transfer, where theoretically it should be 4 GB/s.

So my research and choosing the FX 1800 turned out to be a good decision: had I installed a four-times-more-powerful card, I could not have utilized its full power, the host-to-device speed being the most important parameter after power consumption.

Now, about your card: it has 4 GB of memory, which is a lot. That can help, because if you design your algorithm to fill the memory with one transfer, then leave the data there and do your calculations on it, you circumvent the bottleneck temporarily. But you need a really large amount of data to justify this.
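The "load once, compute many times" idea can be expressed as a sketch comparing total bus traffic: with per-iteration transfers, the data crosses the bus every step; kept resident in device memory, it crosses once. The sizes and step count are arbitrary examples:

```python
# Total host->device bytes over a run: re-uploading each step vs. keeping data resident.
def bus_traffic(data_bytes, iterations, resident):
    """Bytes transferred host->device over the whole run."""
    return data_bytes if resident else data_bytes * iterations

DATA = 4 * 1024**3          # e.g. fill a 4 GB card once
STEPS = 100
naive = bus_traffic(DATA, STEPS, resident=False)
smart = bus_traffic(DATA, STEPS, resident=True)
print(f"keeping data resident saves a factor of {naive // smart}x in transfers")
```

The saving factor is simply the number of iterations, which is why iterative workloads are the ones that hide the PCIe bottleneck best.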

Needless to say, development takes a lot of time, and in the meantime NVIDIA may release 45 nm cards which will consume much less power. I believe the current parts are on a 65 nm process, and with so many transistors we get this power consumption problem.

The importance of the device-to-host (and back) speed motivated me to write a benchmark program that transfers test data in increasing 1 MB increments, to see whether the time scales linearly or otherwise. The resulting graph is interesting and I will publish it soon.
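One common way to interpret such a benchmark is the linear bus model: transfer time = fixed latency + size / bandwidth, so the curve should be a straight line in size, with the latency term dominating small transfers. A sketch with synthetic timings standing in for real measurements:

```python
# Linear bus model: time(size) = latency + size / bandwidth.
# Synthetic "measurements" stand in for real benchmark data.
def time_ms(size_mb, latency_ms=0.02, gbps=3.0):
    return latency_ms + (size_mb / 1024) / gbps * 1e3

sizes = list(range(1, 65))                 # 1..64 MB in 1 MB increments
samples = [(s, time_ms(s)) for s in sizes]

# Recover latency and bandwidth from two samples (exact for a linear model).
(s0, t0), (s1, t1) = samples[0], samples[-1]
slope = (t1 - t0) / (s1 - s0)              # ms per MB
bandwidth_gbps = 1e3 / (slope * 1024)
latency_ms = t0 - slope * s0
print(f"bandwidth ~ {bandwidth_gbps:.2f} GB/s, latency ~ {latency_ms:.3f} ms")
```

With real data the points won’t lie exactly on a line, but plotting time against size and reading off the slope and intercept gives the effective bandwidth and per-transfer overhead directly.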

You can check my blog; there I have also published some materials from Intel conferences about PCIe and how to choose it. It is an interesting read, with diagrams showing how the data is transferred internally between the bus and memory, etc.

I also had an idea to build a database of benchmarks: to collect real-world measurements of how a particular NVIDIA card actually runs on different computers. It would be interesting because the PCIe speed on the Dell 670 is one number, and on a different computer it would be another. It seems to me the theoretical speed of 8 GB/s both ways is unachievable, but the only way to know is to have such a database. I even created a web service; if there is interest from the community in submitting results, we could host it and over time have a very interesting benchmarking database to share.

But my posting about this did not get the response I expected, and by myself I can only install the same card in the computers I have here (and I have a lot), not on many different systems.

Check the blog from time to time; I will publish interesting info there.

Thanks for sharing your experience.

Well, I already have the FX 5800 card and need to find a way to make it work with the 670 workstation.

I am tempted to try a 2x4-pin-to-6-pin converter together with the existing 6-pin to power the 5800. The problem would be checking whether I can power them from the same rail, a risky venture.

Another option is to buy one of the following: …r_specTable.asp

But one of the reviewers mentions this add-on fried the main board: …N82E16817153070

Did you find a power supply upgrade specified by Dell or otherwise? I spent an hour on internet chat/phone with a Dell rep and they gave me nothing.


It is risky, yes.

Another option is to buy one of the following: …r_specTable.asp

Yes, this is the problem.

Exactly; I tried to save you from this ;). They told me they don’t support power supply upgrades and cannot recommend one, possibly for liability reasons. Still, they told me I have to use a 1,200 W power supply.

No, I also tried Amex; they too told me something about 1,200 W and sent me a quote for $300. That is too expensive, so I selected the FX 1800 instead. No problems with it.

If you want to keep your card, the only way I see for you is to buy a new computer with a more powerful power supply. Probably 800 W would be enough.


Thanks thstart!

Yup, more power or a new computer is the only option.

If you get a new computer, better choose one with a motherboard that has two x16 PCIe slots. That way you can have two video cards installed: one for display, the other for CUDA computing. Otherwise, context switching between display and compute will slow down the CUDA computing. We need real-world benchmarks on this too.

Also, better to use Windows 7, because it leaves more memory for your video card; it changed how video memory is handled.