Does anyone know the pros and cons of using a deskside Tesla system (D870) rather than two C870 cards on the mobo?
Interesting question, and though I don’t own any Tesla stuff, I’m going to think out loud.
With the Deskside unit, you know that the power supply can handle the cards, and you can use a physically smaller host computer: “Host Adapter Card: PCI Express x16 or x8, Small Form Factor, Passive (10W)”.
I’d guess that this was one of very few ways to get reasonably powerful CUDA processing into 1U.
Also, if you have two PCI-E x16 or x8 slots, you can connect two D870 desksides to get 4 GPUs in your workstation.
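For what it's worth, here's a rough sketch of how those 4 GPUs would show up from the CUDA runtime API (just an illustrative listing program, nothing D870-specific):

```cuda
// Sketch: enumerate the CUDA devices the runtime can see.
// With two D870 desksides attached, cudaGetDeviceCount() should
// report 4 Tesla devices (plus any CUDA-capable display card).
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %d MB\n",
               i, prop.name, (int)(prop.totalGlobalMem >> 20));
    }
    return 0;
}
```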
Ahh, thanks. I guess I should be more specific. I was really curious about why the deskside system would be 3X the price of two cards.
I’ve mostly been guessing on this whole thing, but aside from the simpler installation and power management, I was curious whether there’s some controller on the backplane that would eliminate bus traffic that would normally occur.
I don’t know whether two PCI-E cards have to go through the northbridge to communicate, or whether they can talk directly to one another. If they can’t talk directly on a motherboard, that would explain the value of the external system: all Tesla-to-Tesla traffic would be taken off the PCI-E bus load.
At least currently (as of 1.1), CUDA does not yet optimize card-to-card transfers. They always occur through the host.
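To make that concrete, a GPU-to-GPU copy in CUDA 1.x has to be staged through a host buffer, something like this (a sketch only; note that in the CUDA 1.x model each device context really lives on its own host thread, which I've collapsed into one function here for brevity):

```cuda
// Sketch: move a buffer from GPU 0 to GPU 1 under CUDA 1.x.
// There is no direct peer-to-peer copy, so the data crosses the
// PCI-E bus twice: device 0 -> host, then host -> device 1.
#include <cuda_runtime.h>
#include <stdlib.h>

void copy_gpu0_to_gpu1(const void* dev_src_on_gpu0,
                       void*       dev_dst_on_gpu1,
                       size_t      bytes)
{
    // Staging buffer in host memory (pinned memory via
    // cudaMallocHost() would be faster; plain malloc for clarity).
    void* host_staging = malloc(bytes);

    // In real CUDA 1.x code these two halves would run on the
    // threads owning each device's context.
    cudaMemcpy(host_staging, dev_src_on_gpu0, bytes,
               cudaMemcpyDeviceToHost);
    cudaMemcpy(dev_dst_on_gpu1, host_staging, bytes,
               cudaMemcpyHostToDevice);

    free(host_staging);
}
```

So whatever the backplane does, as of 1.1 the API itself won't route Tesla-to-Tesla traffic around the host.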
The 1.5GB of memory attached to each chip is probably part of the $$$, and the bragging rights. For 3X the price, it better be qualitatively different w.r.t. bus congestion and such things, as you speculate.
Edit: Oh, wait, you’re comparing Tesla to Tesla. Sorry. Yeah, I don’t understand the price difference :)