Tesla compatibility

Quick question… I am about to build a new system which will include an 8-series card as well as the Tesla board. The manual states that for multi-GPU systems, the GPUs have to be of the same “core architecture”. Does this mean that the card I purchase has to be an 8800 series (compute capability 1.0, same as the Tesla), or can I mix a 1.1 compute capability card (e.g. an 8600) with the Tesla? Thanks…
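
For reference, here is roughly how I plan to check what each board reports once the machine is built. Just a sketch using the standard CUDA runtime device-query calls, nothing from the manual:

```cpp
// Sketch: list every CUDA device with its compute capability and memory.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // e.g. "Tesla C870, compute capability 1.0, ~1536 MB"
        printf("Device %d: %s, compute capability %d.%d, %lu MB\n",
               i, prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem >> 20));
    }
    return 0;
}
```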

it has to be the same clock speed, same amount of memory and same model (ie 8800gts + 8800gts)

So, where does that leave me with the Tesla board? There aren’t any video cards with 1.5 GB of onboard RAM…

Is that totally true, or is it like Wikipedia says, where the pair is treated as two cards with the configuration of the lesser of the two? Well, I will find out in three weeks when I combine my 8800 GTX with a Tesla, and I will also try it with my 8800 GT. For CUDA purposes it doesn’t matter, since the cards are operated separately.
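
Roughly what I mean by “operated separately”: with CUDA, each host thread binds to one device, so something like this sketch (assuming devices 0 and 1 are the GTX and the Tesla) drives each board independently, and the two boards never have to match:

```cpp
// Sketch: one host thread per GPU; the two boards never have to match.
#include <cstdio>
#include <pthread.h>
#include <cuda_runtime.h>

__global__ void fill(float *p, float v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = v;
}

static void *worker(void *arg) {
    int dev = *(int *)arg;
    cudaSetDevice(dev);                 // bind this host thread to one board
    const int n = 1 << 20;
    float *d = 0;
    cudaMalloc((void **)&d, n * sizeof(float));
    fill<<<(n + 255) / 256, 256>>>(d, (float)dev, n);
    cudaThreadSynchronize();            // wait for this board's kernel
    cudaFree(d);
    printf("device %d finished\n", dev);
    return 0;
}

int main() {
    int ids[2] = {0, 1};                // e.g. the 8800 GTX and the Tesla
    pthread_t threads[2];
    for (int i = 0; i < 2; ++i)
        pthread_create(&threads[i], 0, worker, &ids[i]);
    for (int i = 0; i < 2; ++i)
        pthread_join(threads[i], 0);
    return 0;
}
```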

what kind of system do you have?

That seems like a complete waste of a Tesla board… what configuration could you use to optimize the Tesla then? Anyone from NVIDIA have a thought on the matter before I dump a truckload of cash into my new system?

Unless you’re going to do some Stanford DNA stuff, don’t get a Tesla.

Buy one 8800 GT after the holidays; it’s cheaper and better than the rest, and in two months that will save you $200-400 and you will still have a powerful machine.

Well, that really doesn’t answer my question though… especially since PNY is offering the Tesla board right now for $650. It seems illogical that they would create a board which would de facto not be able to perform at its potential like that… I think I am missing something.

The Tesla is created for computing. In that mode you use it separately, with its own capabilities, and you don’t even use it with SLI. I don’t even know if it is usable in SLI, since I don’t yet have a power supply that would support two 8800 GTX-class boards at the same time. That will come in the future. Right now I have a 7600 for display and the Tesla for computing. My thought is this: if I am not using the Tesla for computing, why can’t I use it with the GTX for enhanced video? To totally match the Tesla it would take a Quadro FX 5600. Have you priced them? Certainly not in my budget. But the incremental price of the Tesla over the GTX right now seems reasonable for twice the memory, from the computing viewpoint. I am looking at the GT with its single-slot design for possible four-board implementations, since some motherboards now have more than two x16 slots. Meanwhile they all occupy separate boxes.

Here is a quote from an NVIDIA administrator from a different thread… this isn’t an exact answer, but if their official demo used that config… it can’t be all bad

"I’m not sure I’d recommend trying two C870’s in an Ultra 24. The Ultra 24 is sweet - but there aren’t enough power connectors for two C870’s (you’d need 4 of external the connectors). Plus you’d still need another card to provide display.

We were demonstrating at the supercomputing conference last week an Ultra 24 with a D870 (requires only the 10 Watt adapter card) and an 8800 GT (which can take advantage of the Gen2 and only uses 1 power connector, runs CUDA and gives a nice display, but only has 512MB memory, not the 1.5GB on the C870).

Or I’ve been running a Quadro FX 570 paired with a C870 inside the Ultra 24. The FX 570 runs CUDA (not as many processors or memory) but doesn’t use an external power connector, leaving the 2 available for the C870."

Paraphrased from the Tesla manual: you must already have graphics capability in the computer to use the Tesla board. It could be on the motherboard or another card. For Windows it must be an NVIDIA product recognized by the drivers. For Linux it does not even have to be an NVIDIA product, but the X config file must still load the nvidia driver. Examples in the manual show a Quadro NVS 285 with the Tesla C870. The NVS 285 is not CUDA-compatible; my own graphics card isn’t CUDA-compatible either, yet they still work with the Tesla C870. The only real question is whether the Tesla C870 will work with any card in SLI graphics mode. I am hoping so and would be somewhat disappointed if it didn’t, but it will work in computing mode no matter what, and that is the primary reason for the purchase and the purpose it is advertised as satisfying.
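
As a side note, the non-CUDA display card never even shows up to the runtime, so picking the C870 for compute is straightforward. The “Tesla” name match below is just my own illustration, not anything from the manual:

```cpp
// Sketch: pick the Tesla for compute; a non-CUDA display card never appears
// here, because cudaGetDeviceCount only reports CUDA-capable boards.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main() {
    int count = 0, pick = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        if (strstr(prop.name, "Tesla"))   // name match is just my heuristic
            pick = i;
    }
    cudaSetDevice(pick);
    printf("computing on device %d of %d\n", pick, count);
    return 0;
}
```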

Thanks, that is helpful… I am actually not interested in SLI mode; I am doing scientific computing. Apparently CUDA can’t recognize multiple GPUs in SLI mode anyway… the manual says that SLI has to be disabled for CUDA to recognize the separate GPUs.
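
Once the machine is together I’ll sanity-check that with something like this (just a sketch) to confirm both GPUs show up once SLI is disabled:

```cpp
// Sketch: with SLI disabled in the driver, each CUDA board should be counted.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA sees %d device(s)\n", count);  // expect 2 for GTX + Tesla
    return 0;
}
```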

If you want to do multi-GPU compute work with CUDA, you can pair ANY CUDA-capable cards. You can mix chips, memory amounts, compute capabilities, etc. SadisticMinds’ statement that “it has to be the same clock speed, same amount of memory and same model (ie 8800gts + 8800gts)” is wrong.

You will have to disable SLI in the driver to have both cards available for CUDA work.

If you’re installing a Tesla card/deskside/server, you’ll have to have another card to drive your monitor since Teslas don’t have video out. The card driving the monitor doesn’t even have to be CUDA-capable.

Paulius

Thanks for clarifying that so directly and to the point… off I go to blow all my hard-earned money.