Question about the bandwidth of MCP1650-V003E26 when connected to ConnectX-6 VPI (configured in Ethernet mode)

Hi,

I connected two ConnectX-6 adapters (MCX653106A-HDA_Ax) back to back with a single 200 GbE cable (MCP1650-V003E26).

But the throughput from iperf is only around 80-97 Gb/s, which is well below the HDR / 200 GbE line rate.

After some testing, it looks as though the cable is identified as QSFP28 rather than QSFP56, and mlxcables shows no firmware information for it.

I was wondering whether this is caused by the firmware version or by a software configuration.

Could you give me any advice on this symptom? Thank you very much; I look forward to your feedback.

  1. The ConnectX-6 adapters are already configured in Ethernet mode

#mlxconfig q

Device #1:


Device type: ConnectX6

Name: MCX653106A-HDA_Ax

Description: ConnectX-6 VPI adapter card; HDR IB (200Gb/s) and 200GbE; dual-port QSFP56; PCIe4.0 x16; tall bracket; ROHS R6

Device: /dev/mst/mt4123_pciconf0
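For reference, the port protocol can be double-checked directly with mlxconfig (a sketch using the device path from the output above; on ConnectX VPI cards LINK_TYPE is 1 for InfiniBand and 2 for Ethernet):

```shell
# Query the configured protocol of both ports on the device shown above
mlxconfig -d /dev/mst/mt4123_pciconf0 q LINK_TYPE_P1 LINK_TYPE_P2

# If a port is still in IB mode, switch it to Ethernet (takes effect after reboot)
mlxconfig -d /dev/mst/mt4123_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```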

#ibstatus

Infiniband device 'mlx5_0' port 1 status:

default gid: fe80:0000:0000:0000:0e42:a1ff:fe4a:6528

base lid: 0x0

sm lid: 0x0

state: 4: ACTIVE

phys state: 5: LinkUp

rate: 200 Gb/sec (4X HDR)

link_layer: Ethernet

  2. mlxcables shows the cable as QSFP28 only

#mlxcables

Cable #1:


Cable name : mt4123_pciconf0_cable_0

No FW data to show

-------- Cable EEPROM --------

Identifier : QSFP28 (11h)

Technology : Copper cable unequalized (a0h)

Compliance : 40GBASE-CR4, Reserved

Attenuation : 2.5GHz: 4dB

5.0GHz: 6dB

7.0GHz: 8dB

12.9GHz: 11dB

25.78GHz: 0dB

OUI : 0x0002c9

Vendor : Mellanox

Serial number : MT2010VS04564

Part number : MCP1650-V003E26

Revision : A2

Temperature : N/A

Length : 3 m

  3. Configured the IP and set MTU=9000

enp97s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000

inet 15.15.15.200 netmask 255.255.255.0 broadcast 15.15.15.255
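For completeness, the interface setup above can be reproduced with iproute2 (interface name and address taken from the ifconfig output above):

```shell
# Set jumbo MTU and the test address on the ConnectX-6 port
ip link set dev enp97s0f0 mtu 9000
ip addr add 15.15.15.200/24 dev enp97s0f0
ip link set dev enp97s0f0 up
```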

  4. Result: the bandwidth is far below HDR

iperf -c 15.15.15.201 -P 8 -t 10 -w 512k


Client connecting to 15.15.15.201, TCP port 5001

TCP window size: 416 KByte (WARNING: requested 500 KByte)


[ 8] local 15.15.15.200 port 50060 connected with 15.15.15.201 port 5001

[ 6] local 15.15.15.200 port 50056 connected with 15.15.15.201 port 5001

[ 7] local 15.15.15.200 port 50058 connected with 15.15.15.201 port 5001

[ 10] local 15.15.15.200 port 50064 connected with 15.15.15.201 port 5001

[ 4] local 15.15.15.200 port 50054 connected with 15.15.15.201 port 5001

[ 3] local 15.15.15.200 port 50051 connected with 15.15.15.201 port 5001

[ 5] local 15.15.15.200 port 50050 connected with 15.15.15.201 port 5001

[ 9] local 15.15.15.200 port 50062 connected with 15.15.15.201 port 5001

[ ID] Interval Transfer Bandwidth

[ 8] 0.0-10.0 sec 11.9 GBytes 10.2 Gbits/sec

[ 6] 0.0-10.0 sec 12.4 GBytes 10.6 Gbits/sec

[ 7] 0.0-10.0 sec 12.8 GBytes 11.0 Gbits/sec

[ 10] 0.0-10.0 sec 11.6 GBytes 9.95 Gbits/sec

[ 4] 0.0-10.0 sec 12.4 GBytes 10.7 Gbits/sec

[ 3] 0.0-10.0 sec 11.4 GBytes 9.78 Gbits/sec

[ 5] 0.0-10.0 sec 12.4 GBytes 10.6 Gbits/sec

[ 9] 0.0-10.0 sec 11.8 GBytes 10.2 Gbits/sec

[SUM] 0.0-10.0 sec 96.7 GBytes 83.1 Gbits/sec

Best Regards,

Vin Huang

Hi Vin,

  1. Please make sure both server and client are aligned with the latest firmware and driver versions (available on our website).

  2. Please follow our performance tuning guide for Mellanox adapters:

https://community.mellanox.com/s/article/performance-tuning-for-mellanox-adapters

  3. After applying all the tuning steps, please use iperf2 and taskset to test the performance.

FYI, you should use multiple threads spread across different cores in order to reach full line-rate bandwidth, and use the CPUs corresponding to the NUMA node closest to where the HCA is installed.

For example:

Server:

#taskset -c <cpus> iperf -s -B <server ip> -i 5 -t 10 -l 64k (variants: -l 128k, -l 1024k, -w 1m, -w 2m)

Client:

#taskset -c <cpus> iperf -c <server ip> -B <client ip> -i 5 -t 10 -l 64k -P 4 (variants: -l 128k, -l 1024k, -w 1m, -w 2m, -P 8/10/12/16)
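A concrete sketch of the NUMA-pinning step above, using standard sysfs attributes and the interface name from Vin's output (the CPU range is only an example of what the sysfs query might return):

```shell
# Find the NUMA node and the NUMA-local CPU list of the adapter
cat /sys/class/net/enp97s0f0/device/numa_node
cat /sys/class/net/enp97s0f0/device/local_cpulist   # e.g. "0-23" (example only)

# Then pin iperf to those cores; server side:
taskset -c 0-23 iperf -s -i 5 -l 64k
# Client side:
taskset -c 0-23 iperf -c 15.15.15.201 -i 5 -t 10 -l 64k -P 8
```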

Regards,

Chen

Hi Hamami,

Many thanks for your feedback and advice.

I will check the firmware and driver versions of the adapters and go through the performance tuning guide you advised.

I also have one more question about the status of the cable.

As mentioned earlier, I connected two ConnectX-6 VPI adapters (firmware version 20.28.1002) through an MCP1650-V003E26, a 200 GbE cable.

I followed the article "How to Find Cable Info on Mellanox Adapters and Switches" (MFT usage section).

I got the cable information below; the identifier of the cable is QSFP28. Is this correct? Since this cable supports QSFP56, I expected to see QSFP56 here…

In your experience, is there anything abnormal in this cable information? If so, could you share an example of a correct one with me? Any advice is much appreciated.

#mlxcables

Cable #1:


Cable name : mt4123_pciconf0_cable_0

No FW data to show

-------- Cable EEPROM --------

Identifier : QSFP28 (11h)

Technology : Copper cable unequalized (a0h)

Compliance : 40GBASE-CR4, Reserved

Attenuation : 2.5GHz: 4dB

5.0GHz: 6dB

7.0GHz: 8dB

12.9GHz: 11dB

25.78GHz: 0dB

OUI : 0x0002c9

Vendor : Mellanox

Serial number : MT2010VS04564

Part number : MCP1650-V003E26

Revision : A2

Temperature : N/A

Length : 3 m

Best Regards,

Vin Huang

Hi @Chen Hamami,

Do you have any idea about the cable information I posted in my previous message?

Is there a way for this cable to be identified as a QSFP56 one?

Any advice is much appreciated.

Best Regards,

Vin Huang

Hi Vin,

I'm still investigating this.

In my setup this cable is also identified as a QSFP28.

Will update soon with my findings.

Regards,

Chen

Hi Vin,

I have confirmed that QSFP28 and QSFP56 use the same identifier. QSFP56 is a marketing name for a QSFP running 50G per lane (QSFP 200G); there is no operational difference.
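To illustrate Chen's point, here is a small sketch based on my reading of the SFF-8024 identifier table (the table excerpt and per-lane rates below are my assumptions, not from the thread): QSFP28 and QSFP56 cables report the same identifier byte, 0x11, so the form-factor name printed by mlxcables does not limit the data rate.

```python
# Excerpt from the SFF-8024 identifier table (assumed for illustration):
# QSFP56 cables reuse the QSFP28 identifier byte, 0x11.
SFF8024_IDENTIFIERS = {
    0x0C: "QSFP",
    0x0D: "QSFP+",
    0x11: "QSFP28",  # also reported by QSFP56 cables
    0x18: "QSFP-DD",
}

def decode_identifier(byte):
    """Map an EEPROM identifier byte to its form-factor name."""
    return SFF8024_IDENTIFIERS.get(byte, "Unknown")

def total_rate_gbps(lanes, gbps_per_lane):
    """Total cable bandwidth in Gb/s across all lanes."""
    return lanes * gbps_per_lane

# The "11h" shown by mlxcables decodes to the shared QSFP28 identifier:
print(decode_identifier(0x11))     # QSFP28
# QSFP28: 4 lanes x 25G NRZ  -> 100 Gb/s
print(total_rate_gbps(4, 25))      # 100
# QSFP56: 4 lanes x 50G PAM4 -> 200 Gb/s, what this HDR cable actually runs
print(total_rate_gbps(4, 50))      # 200
```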

Regards,

Chen

Hi @Chen Hamami,

Thanks for your reply.

Do you mean that even though the identifier shown by mlxcables is QSFP28, the cable can still run at QSFP56 speed (200 Gb/s)?

So the capability of the cable is still 200 Gb/s under the QSFP28 identifier, and the iperf bandwidth should get close to 200 Gb/s. Am I understanding correctly?

Best Regards,

Vin

Hi Vin,

You are correct.

Best Regards,

Chen

Hi @Chen Hamami,

Many thanks for your explanation.

Then I don't need to worry about the cable now 😊.

Best Regards,

Vin