I have purchased a used Nvidia Tesla S1070 and am looking to do some CUDA and Neural network testing on it. It did not come with a host interface card (HIC) or PCI Express cables.
However, I have been reading the S1070 description at http://www.nvidia.com/docs/IO/43395/SP-04154-001_v02.pdf, but it doesn’t specify which kind of PCI Express cable I should be using.
Is there a model or part number for the PCI Express cable I should be using?
Is it the Molex 74546-0813 0.5 m one for PCI-E x8? Will that connect to a PCI-E x16 HIC card?
These cables are not cheap and I do not want to buy the wrong one. I tried the Molex part drawings at http://www.molex.com/pdm_docs/sd/745460813_sd.pdf and http://www.molex.com/pdm_docs/sd/745461611_sd.pdf, but neither matches the diagram in Nvidia’s S1070 document.
Are there any other reference documents?
Please advise. I would really appreciate the help.
I’m in a similar situation. Any of the GHIC cards will work, as will either the long or short cables.
Unfortunately, the best reference I can provide is links to eBay listings of what you’re looking for.
An afternoon on eBay searching around for “Nvidia Tesla” will eventually turn up more for you.
Look for the word “Leoni”.
There’s also a full-height GHIC with two ports that’s x16, but those are rarer.
There’s also a short cable (an uncomfortably short one) that’s harder to come by and more expensive; at only about a foot long, it isn’t much use unless you’re building a 2U rackmount system.
I re-read your post and realized I should also caution you that the individual GPUs in the S1070 were once quite valuable, but nowadays they’re practically irresponsible to run in terms of performance per watt.
I actually have a drawer full of the S1070 GPUs (I think they’re M1060s), and they use so much power that I don’t see much point in running them. They’re also only barely supported by modern APIs, including CUDA: you may find yourself in the weird, uncharted waters of installing old drivers to get them to work, and then matching that driver with an old version of the CUDA toolkit. It gets ugly real quick.
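As a rough sanity check of a driver/toolkit pairing on a Linux box, something like the following can help confirm what you’ve actually got installed (the specific version cutoffs in the comments are from memory, so double-check them against Nvidia’s release notes before committing to a setup):

```shell
# Report the driver version and detected GPUs. The S1070-era parts are
# compute capability 1.3, which (as far as I recall) topped out on the
# legacy 340.xx driver branch.
nvidia-smi

# Report the installed CUDA toolkit version. CUDA 6.5 was, I believe,
# the last toolkit release that could still target compute 1.x devices.
nvcc --version
```

If nvidia-smi doesn’t list the cards at all, the driver branch is almost certainly too new for them, and no toolkit version will save you.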
Please consider this:
If you have a workload you’re writing from scratch, do yourself a huge favor and either:
Dig around on eBay for a used Tesla K10, which on its own performs about as well as the whole S1070 server.
Determine whether you really need ECC and the whole Tesla feature set, because you might get a GeForce card that’s four times as powerful for the same price and consumes WAY less power. (My numbers may be fudged, and I’m thinking only of single-precision workloads.) The GTX 1060 could be great for you, depending on what you need.
If you need double-precision floating point, the best deal by a wide margin is jumping on eBay and getting some M2090 cards (if you can figure out how to cool them). While they guzzle electricity like it’s going out of style, they’re easily the best bang for your buck anywhere for double-precision.