Real differences between GeForce GTX and Tesla: is there more than what's stated on the spec sheets?


I am comparing the GeForce GTX 285 and the Tesla S1070 for suitability for my CUDA-enabled project. My question is, are there more differences between them than just the features listed in their spec sheets? Such as for example:

    Suitability for server-side environments of one card vs. the other one

    Availability of drivers for a larger number of OSs for one card than for the other one

    Any “soft” quality (that can’t be captured in a spec sheet) that makes one card stand out compared to the other one


Here are the facts, taken from their respective product pages:

GeForce GTX 285 (Price: ~$400)

# of Tesla GPUs: 1

Processor Cores: 240

Single Precision floating point performance (peak): N/A

Double Precision floating point performance (peak): N/A

Processor Clock (MHz): 1476 MHz

Memory Clock (MHz): 1242 MHz

Standard Memory Config: 1 GB GDDR3

Memory Interface Width: 512-bit

Memory Bandwidth (GB/sec): 159

Tesla S1070 (Price: ~$1300)

# of Tesla GPUs: 1

Processor Cores: 240

Single Precision floating point performance (peak): 933 GFlops

Double Precision floating point performance (peak): 78 GFlops

Processor Clock (MHz): 1300 MHz

Memory Clock (MHz): 800 MHz

Standard Memory Config: 4 GB

Memory Interface Width: 512-bit

Memory Bandwidth (GB/sec): 102

Thank you.

You might want to try again. I think you will find you have the S1070 and C1060 (and their prices) somewhat mixed up…

Yeah, it looks like you have the price of the Tesla C1060 (a card roughly the same size as the GTX 285), but the specs of the S1070 (which is a 1U rackmount enclosure with four C1060 inside) listed.

All CUDA drivers run both the Tesla and the GeForce cards, so there is no difference there. Both cards have the same compute capability (1.3). The Tesla cards have 4 GB RAM per card, although you can now get a GTX 285 with 2 GB of RAM. In addition, the Tesla card has no video connector.
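Since both parts report the same compute capability, a quick way to confirm what the runtime actually sees on your own box is to query the device properties. A minimal sketch using the standard CUDA runtime API (`cudaGetDeviceCount` / `cudaGetDeviceProperties`); the names, counts, and memory sizes printed will of course depend on your hardware:

```cuda
// Enumerate CUDA devices and print compute capability and memory,
// e.g. to verify a card is the expected CC 1.3 / 4 GB part.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %zu MB global memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```

On a GTX 285 and a C1060 this should print `compute capability 1.3` for both; the RAM figure is what differs.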

Probably the biggest “soft” difference is that NVIDIA says the Tesla is better for computation because of greater quality control for 24/7 use. Many people use GeForce cards in 24/7 configurations, so for a small setup I’d still go with the GeForce. Just be sure to go with a reputable card maker (EVGA is my favorite), and stay away from the overclocked gamer cards.

If you want to add CUDA capability to a cluster, the S1070 (which probably costs ~$10k) is very handy because it comes with its own power and cooling for the cards. The CUDA devices are then connected to your existing rackmount servers via an external cable.

Thanks for that, and for pointing out the mistake in the original post too. That answers my questions. I have also edited the original post to reflect the real specs, for future reference.

An interesting question, but is this true? In particular a question I’ve always had is whether you can use a Tesla (which has no display out) as an OpenGL rendering destination, or whether the OpenGL driver itself is missing since there’s no display out.


The three main differences between the two cards that I am aware of are (GeForce vs. Tesla):

  • RAM (1 GB | 4 GB)

  • DVI connectors (2 connectors | none; the Tesla can’t drive any monitor)

  • NVIDIA support (apparently the Tesla entitles you to priority technical support from NVIDIA; someone from NVIDIA should be able to elaborate on this)


I also didn’t expect this, but that’s the impression I got in another thread discussing this. (Don’t have a link at the moment.)

Running OpenGL applications on Tesla hardware is possible, although not supported by NVIDIA (you’ll have to buy Quadro for OpenGL support).

If you configure the X server with:
nvidia-xconfig --virtual=800x600 --use-display-device=none -c /root/

you get a virtual X display on which you can run OpenGL/shader programs by setting the DISPLAY environment variable.
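For example, assuming the headless X server configured above is running as display `:0` (the program name here is a hypothetical placeholder):

```shell
# Point GL at the virtual screen on the headless X server.
export DISPLAY=:0
# Sanity check: the renderer string should name the NVIDIA GPU.
glxinfo | grep "OpenGL renderer"
# Then run your own OpenGL/shader program as usual.
./my_shader_program
```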