Is it possible to use all 40 PCI-E lanes on X79 workstation with Tesla S1070?

Hi, folks!

I’m planning to use an X79 (Sandy Bridge-E) chipset-based workstation as the host system for a Tesla S1070. I want maximum I/O between the Tesla and the workstation. From what I’ve read, X79 provides 3 PCI-E 3.0 ports, but I also need a (bare-bones) graphics card to interact with the host. Since the platform has 40 lanes and the Tesla uses 2 x16 host interface cards, I could potentially put the graphics card in an x8 slot.

So the question is: will running at x8 degrade performance, and if so, what would you recommend as a remedy?

Thanks in advance!

Just knowing the chipset is generally not enough to say what can be plugged into a particular system. You need to know the specifics of the motherboard design, i.e. how it routes PCIe lanes from the chipset to the physical PCIe slots. The BIOS can affect things as well.

To answer the question about whether a GPU can be used with 8 lanes: in general, the answer is yes. This assumes the motherboard routes those 8 lanes to an x16 physical (mechanical), x8 electrical slot, and that the BIOS supports that configuration.
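If you want to see what a given slot actually delivers for your workload, a quick host-to-device copy timing (along the lines of the bandwidthTest sample that ships with the CUDA toolkit) makes the x8 vs. x16 question concrete. Below is a minimal sketch using the CUDA runtime API; the 256 MB buffer size and device index 0 are arbitrary choices, and error checking is omitted for brevity:

```
// pcie_bw.cu -- minimal sketch of a host-to-device bandwidth check
// (rough equivalent of the bandwidthTest CUDA sample; buffer size is arbitrary)
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 256 * 1024 * 1024;   // 256 MB transfer
    float *h_buf = nullptr, *d_buf = nullptr;

    cudaSetDevice(0);                          // pick the GPU in the slot you want to test
    cudaMallocHost(&h_buf, bytes);             // pinned host memory, needed for full PCIe speed
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host->Device: %.2f GB/s\n", (bytes / 1.0e9) / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}
```

On a PCIe 2.0 x16 link you would typically see something in the 5-6 GB/s range with pinned memory; an x8 link would deliver roughly half that, which only matters if your application is transfer-bound rather than compute-bound.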

There can also be system BIOS compatibility issues when putting a large number of GPUs (5 in this case, counting the display card) in a system.

Note the statement in the data sheet:

“NVIDIA recommends using systems tested for compatibility with Tesla 1U systems.”

The S1070 is really old (2008). You might want to investigate newer GPU solutions. The S1070 system will not operate at PCIe 3.0 speeds; those links would run at no higher than PCIe 2.0 speeds (roughly 8 GB/s per direction for an x16 link, versus roughly 15.75 GB/s for PCIe 3.0 x16).
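If you ever want to confirm what link a GPU actually negotiated, NVML (the library behind nvidia-smi) can report the current PCIe generation and width. A rough sketch follows; note that these particular queries may return NOT_SUPPORTED on hardware as old as the S1070's GPUs, and error checking is mostly omitted:

```
// pcie_link.cpp -- sketch: query the negotiated PCIe link generation/width via NVML
// build with: g++ pcie_link.cpp -lnvidia-ml
#include <cstdio>
#include <nvml.h>

int main()
{
    if (nvmlInit() != NVML_SUCCESS) { printf("NVML init failed\n"); return 1; }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        char name[64] = {0};
        unsigned int gen = 0, width = 0;

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));

        if (nvmlDeviceGetCurrPcieLinkGeneration(dev, &gen) == NVML_SUCCESS &&
            nvmlDeviceGetCurrPcieLinkWidth(dev, &width) == NVML_SUCCESS)
            printf("GPU %u (%s): PCIe gen %u x%u\n", i, name, gen, width);
        else
            printf("GPU %u (%s): PCIe link query not supported\n", i, name);   // likely on very old boards
    }

    nvmlShutdown();
    return 0;
}
```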

The Tesla S1070 delivered about 4 TFLOPS of single-precision performance (only about 0.3 TFLOPS double precision), aggregated across its 4 GPUs. A single K20c or K40c could provide approximately that level of performance in a single slot, on a newer GPU architecture with more features, lower power consumption and complexity, and possibly fewer compatibility issues.

txbob,
thank you so much for a prompt and thorough reply!

I know that the S1070 is already a grandma, but it’s so cheap on the market ($300 vs $2000 for a K20) that I thought it’s a good balance of price / learning curve / performance.

I thought that the S1070’s internal PCIe switches did all the work of “hiding” the 4 GPUs, so the host would only see 2 “GPUs”, one behind each host interface card. Is that wrong?

No, you’ll see 4 GPU devices enumerated in PCI config space, and 4 devices will show up either in deviceQuery or nvidia-smi. This assumes of course that you have both cables plugged into the same system.
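If you want to confirm the enumeration yourself, a few lines against the CUDA runtime API will list every GPU the host sees together with its PCI location (deviceQuery from the CUDA samples prints the same information in more detail). Just a sketch, with error checking omitted:

```
// list_gpus.cu -- sketch: enumerate CUDA devices and their PCI locations
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices visible to this host: %d\n", count);   // expect 4 with both S1070 cables attached

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  device %d: %s (PCI %04x:%02x:%02x)\n",
               i, prop.name, prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}
```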

Okay, got it! Thanks a lot!