Can two cards in one motherboard double the speed between nodes?

I have a Gigabyte MZ31-AR0 motherboard, which has five PCI Express 3.0 x16 slots and two PCI Express 3.0 x8 slots. The five x16 slots hold NVIDIA GPUs (Tesla K80, PCIe 3.0 x16), and I would like to scale the server out to multiple nodes while keeping the RDMA network speed between nodes at PCIe 3.0 x16 rates.

Is it possible to use the two PCIe 3.0 x8 slots to fit two MCX353A-FCBT FDR InfiniBand cards, connect both to a Mellanox switch (Mellanox MSX6015F-1SFS InfiniBand switch, 18x 56 Gb/s FDR InfiniBand ports, QSFP), and so achieve twice the 56 Gb/s rate, hence maintaining the interconnection speed between GPUs of different nodes at PCIe 3.0 x16 levels?
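A quick back-of-envelope check may help frame the question. The figures below use the published line rates and encoding overheads (PCIe 3.0: 8 GT/s per lane with 128b/130b coding; FDR 4x: 14.0625 Gb/s per lane with 64b/66b coding); the function names are mine:

```python
# Back-of-envelope check: can two FDR HCAs in two PCIe 3.0 x8 slots
# approach the bandwidth of one PCIe 3.0 x16 link?

GT_PER_LANE = 8e9            # PCIe 3.0: 8 GT/s per lane
PCIE_ENC = 128 / 130         # 128b/130b encoding overhead

def pcie3_gbytes(lanes):
    """Theoretical PCIe 3.0 payload bandwidth in GB/s (one direction)."""
    return lanes * GT_PER_LANE * PCIE_ENC / 8 / 1e9

FDR_LANE_GBPS = 14.0625      # FDR 4x link: 14.0625 Gb/s per lane, 4 lanes
IB_ENC = 64 / 66             # 64b/66b encoding overhead

def fdr_gbytes(ports):
    """Theoretical FDR 4x payload bandwidth in GB/s."""
    return ports * 4 * FDR_LANE_GBPS * IB_ENC / 8

x8  = pcie3_gbytes(8)          # ~7.88 GB/s: enough for one FDR card (~6.82 GB/s)
x16 = pcie3_gbytes(16)         # ~15.75 GB/s
two_cards = 2 * fdr_gbytes(1)  # ~13.6 GB/s aggregate across both HCAs

print(f"PCIe 3.0 x8 : {x8:.2f} GB/s")
print(f"PCIe 3.0 x16: {x16:.2f} GB/s")
print(f"2 x FDR HCA : {two_cards:.2f} GB/s")
```

So one FDR card fits comfortably within an x8 slot, and two of them aggregate to roughly 13.6 GB/s, which approaches, but does not quite reach, a full x16 link's ~15.75 GB/s.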

I hope I have been clear.

Please let me know if any clarification is necessary.

Many thanks.

Assoc. Prof. Antonis Papadakis

CEO and Founder


Tel: +357 99334791


Hi Antonis,

I am checking internally on this and will get back to you.



Hi Antonis,

After checking internally, we do not see why it wouldn’t work. However, you would have to balance the load between the two ports.
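One common way to balance the load across two single-port HCAs is to pin each local process to one adapter in round-robin fashion. The sketch below is an assumption-laden illustration, not a verified recipe: the device names (`mlx4_0`/`mlx4_1` for ConnectX-3), the `OMPI_COMM_WORLD_LOCAL_RANK` variable (Open MPI's launcher), and the use of UCX's `UCX_NET_DEVICES` selector all depend on your driver stack and MPI; check `ibstat` for the real device names on your system.

```python
# Minimal sketch of static load balancing across two HCAs: alternate
# local processes between the two adapters. Device names and the
# local-rank variable are assumptions about the driver stack and
# MPI launcher in use -- verify with `ibstat` and your MPI's docs.
import os

HCAS = ["mlx4_0:1", "mlx4_1:1"]   # one entry per single-port ConnectX-3 card

def pick_hca(local_rank: int) -> str:
    """Round-robin processes between the available adapters."""
    return HCAS[local_rank % len(HCAS)]

local_rank = int(os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK", "0"))
os.environ["UCX_NET_DEVICES"] = pick_hca(local_rank)  # UCX-based MPI reads this
print(local_rank, os.environ["UCX_NET_DEVICES"])
```

The point is simply that aggregation is not automatic: something (MPI rank placement, bonding, or application-level striping) has to spread traffic over both cards.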



Yes, and you could do this in many ways: an entire array of single-port cards, all linked to the switch. Alternatively, you could buy dual-port (or dare I say quad-port) QSFP HCA cards rated for the job and double, triple, or quadruple the total throughput from a single PCIe slot, then populate the remaining PCIe slots with the same HCA cards to push the total throughput even further.

For example, I have a 7-slot PCIe motherboard sat on my bench. With the rated switch, a single-port QDR card in one PCIe slot will give me 32 Gb/s, a dual-port card 64 Gb/s, and a quad-port card 128 Gb/s, in theory, on paper, and, as mentioned, all from a single PCIe slot. I'd still have six free PCIe slots and could populate them in the same manner.
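The QDR figures above follow from the 8b/10b line coding used up to QDR: a 4x link signals 4 × 10 = 40 Gb/s on the wire, but only 8 of every 10 bits carry data. A short sanity check (function name is mine):

```python
# QDR 4x arithmetic: 40 Gb/s signalling, 8b/10b coding -> 32 Gb/s of data,
# scaling linearly with the number of ports.
QDR_SIGNAL_GBPS = 4 * 10          # QDR 4x link: 40 Gb/s on the wire
ENC_8B10B = 8 / 10                # 8b/10b line coding used up to QDR

def qdr_data_gbps(ports: int) -> float:
    """Effective data rate across a given number of QDR 4x ports."""
    return ports * QDR_SIGNAL_GBPS * ENC_8B10B

for ports in (1, 2, 4):
    print(f"{ports} port(s): {qdr_data_gbps(ports):.0f} Gb/s")
```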

You could use lots of different configurations to keep up with those number-crunching GPUs, buddy. As ever, it's also a case of how much you are willing to spend if buying brand new, or how much time you can invest in sourcing parts at a cheaper price.

Hope this helps you out.

The Mellanox ConnectX-6 VPI 200 Gb/s InfiniBand card, for instance, will do you very well. I'm sure you can get these in a dual-port version too.

I assume you have good funding or business income? I reckon these cards would go for around £700+ easily, but they would hugely enhance your overall capability in the future, rather than just adding another 56 Gb/s card with a lone port.

*Another thought, a cheaper option I would use if I had enough lanes left after the GPU and InfiniBand allocations: a PCIe switch (multiplier) card. You could divide an x16 or x8 slot into many x4, or even x1, links to support the cheaper single-port 40/56 Gb/s cards and build an array that way too. But that is not a plug-and-play option; sadly, you'd have to mount all those cards elsewhere or fiddle about.
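Before going down the bifurcation road, it is worth checking how much of a 40/56 Gb/s card actually survives behind a narrow PCIe slice. A rough calculation (same PCIe 3.0 line-rate and encoding figures as above; function name is mine):

```python
# Sanity check for the PCIe-switch idea: payload bandwidth of narrow
# PCIe 3.0 links versus a 40/56 Gb/s single-port InfiniBand card.
GT_PER_LANE = 8          # PCIe 3.0: 8 GT/s per lane
ENC = 128 / 130          # 128b/130b encoding overhead

def pcie3_gbps(lanes: int) -> float:
    """Payload bandwidth of a PCIe 3.0 link in Gb/s (one direction)."""
    return lanes * GT_PER_LANE * ENC

for lanes in (1, 4, 8):
    print(f"x{lanes}: {pcie3_gbps(lanes):.1f} Gb/s")
# An x4 slice gives ~31.5 Gb/s -- just under QDR's 32 Gb/s of data and
# well below FDR -- so x4 throttles a 40/56 Gb/s card; x1 is ~7.9 Gb/s.
```

In other words, the multiplier approach trades per-card bandwidth for card count, which may or may not be the right trade for GPU traffic.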

Myself, on the other hand, I've a small start-up and 40 Gb/s is roughly okay for me currently. Contact me if you ever need any help, heh.