Cable issue when building InfiniBand

Hello,

We have a problem building our InfiniBand network: we cannot reach the rated speed of 40Gb/s, only about 6.5Gb/s. We read some posts on the Mellanox community, and they suggest it may be a cable problem. However, we don't have dedicated InfiniBand cables yet. Must we use dedicated InfiniBand cables, or can we use non-InfiniBand cables that support 40Gb/s?

Our adapters are Mellanox ConnectX-3 adapters (link speed: 40Gb/s), but the active speed is only 10Gb/s. We don't know what "active speed" means; we searched for the term on the Internet but couldn't find a good explanation. Can someone explain it to us?

First, the active speed might refer to the per-lane rate of 10Gb/s, i.e. 4 lanes * 10Gb = 40Gb total (I haven't double-checked this, though).
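
A quick way to check, assuming the adapter shows up as mlx4_0 (take the actual device name from ibv_devices; the field names below are from memory), is the verbose output of ibv_devinfo, which reports the active link width and the per-lane speed:

ibv_devinfo -v -d mlx4_0 | grep -iE "active_width|active_speed" # 4X together with 10.0 Gbps means a full 40Gb/s QDR link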

Some initial questions related to your tests:

How do you test when the result is 6.5 Gb/s?

  • Server OS?

  • Point-to-point between two servers, or both servers attached to an InfiniBand switch (which type)?

  • IP over InfiniBand (IPoIB) with a test tool like iperf, or an RDMA performance tool? (example commands below)
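
For reference, a minimal way to run both kinds of test, assuming the IPoIB link is up and 192.168.10.1 is the other server's IPoIB address (the address is just a placeholder):

iperf -s # on server A: IPoIB throughput test, server side
iperf -c 192.168.10.1 -P 4 -t 30 # on server B: 4 parallel streams for 30 seconds
ib_write_bw # on server A: raw RDMA bandwidth test (perftest package), no IP stack involved
ib_write_bw 192.168.10.1 # on server B: connects to A and reports the achieved bandwidth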

Please run these commands and provide the output:

ibv_devices # shows the installed IB devices

ibv_devinfo <devicename> # shows port types and MTU values

ibstat # shows the link status and actual IB rate

ip a l (or ifconfig -a)

Additional info: if the 6.5 Gb/s result comes from testing over IPoIB, then it matters a lot which MTU size the IPoIB partition (pkey) is configured to use. This is configured in the subnet manager, which often runs in the IB switch, or in one of the servers if you use a point-to-point connection without a switch.

In IPoIB Connected Mode, an MTU size of up to 64KB is allowed, compared to 4KB for datagram (non-Connected) mode.
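
As a rough sketch (assuming the IPoIB interface is called ib0 and you are on Linux with the OFED/inbox ipoib driver; adjust names to your setup), the mode and MTU can be checked and changed like this:

cat /sys/class/net/ib0/mode # prints "datagram" or "connected"
echo connected > /sys/class/net/ib0/mode # switch to connected mode (run as root)
ip link set dev ib0 mtu 65520 # raise the IPoIB MTU; only valid in connected mode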

The server hardware is also important, such as which kind of PCIe slot the adapter sits in and how fast it is.
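
One way to check the PCIe link (a sketch; the PCI address 03:00.0 is only an example, take the real one from the first command):

lspci | grep -i mellanox # find the adapter's PCI address
lspci -vv -s 03:00.0 | grep -iE "lnksta|lnkcap" # run as root; compare the negotiated link (LnkSta) against the card's capability (LnkCap)

A ConnectX-3 card wants a PCIe Gen3 x8 slot; in a narrower or slower slot the bus itself caps throughput well below the 40Gb/s port rate.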

InfiniBand QDR uses 8b/10b encoding, so your total usable bandwidth will be about 32Gb/s.

Because of the inefficiency of the IP stack, I would guess that IPoIB on QDR tops out around 20-25 Gb/s when configured optimally (PCI bus, MTU, connected/datagram mode).
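
To make the arithmetic behind that estimate explicit (the 20-25 Gb/s figure is a guess, not a measurement):

4 lanes * 10 Gb/s per lane = 40 Gb/s signalling rate
40 Gb/s * 8/10 (8b/10b encoding) = 32 Gb/s of usable data bandwidth
IPoIB then adds IP/TCP and interrupt overhead on top of that, which is why 20-25 Gb/s is a realistic ceiling for iperf, while a raw RDMA test like ib_write_bw should land noticeably closer to 32 Gb/s.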

These are excellent suggestions, once the cable question is addressed. I assume the cable is actually a QSFP 40GbE cable?

Using a non-InfiniBand cable for 40Gb/s (QDR) InfiniBand is a bad idea. When QDR InfiniBand was introduced, cable suppliers actually had two different sets of cable products, one for 40Gb/s Ethernet and one for 40Gb/s InfiniBand, and the two types had slightly different specifications. Mellanox is the only manufacturer that makes a single cable type supporting both 40Gb/s InfiniBand and 40Gb/s Ethernet; these are called 'Virtual Protocol Interconnect' (VPI) cables. They are not that expensive and can be found online or in the Mellanox Store. Other reputable vendors also sell QDR InfiniBand cables, but don't expect their 40GbE cables to work for InfiniBand.

(Beware-- many cable vendors are happy to sell cables that won’t work. I recently saw 100-foot HDMI cables for sale, which is much longer than what the HDMI spec supports.)

(Now that Ethernet and InfiniBand run at 100Gb/s, it's no longer feasible to make cables that handle both Ethernet and InfiniBand. The cable specifications have diverged even further, on higher-level issues such as error-correction strategies.)

Thank you for your help. We changed to an appropriate cable and moved the adapter to an appropriate PCI-E slot, and it worked: we have now reached the rated speed.