ConnectX-3 Pro: ports negotiate to a wrong protocol when the cable is plugged

Greetings to everyone!

First of all, please pardon my ignorance, as I have only a basic understanding of networking, and my question might seem stupid. The hardware is actually OEM (HPE), but I’m posting my question here because I find this forum really helpful.

So, here is the thing - I need to connect two servers using ConnectX-3 Pro HCAs, without an IB switch, to run MPI applications with RDMA on Windows Server 2012 R2. To get the IB network running, I follow the normal procedure: install the driver and WinOF, and create the OpenSM service. Everything works properly, and MPI tests show the expected latency (~2 usec) and bandwidth (~4500 MB/s). But problems arise when the servers are rebooted - port #1 on both HCAs always negotiates to the Ethernet protocol when the cable is plugged in during system boot, regardless of the per-port UEFI setting (which strictly corresponds to the mlxconfig setting). For example, when the port 1 cable is plugged in and port 2 is unplugged, the configuration becomes: Port 1 - Eth, Port 2 - IB. This is how it looks in Device Manager:

And I’m unable to change that, because the driver settings are grayed out:

The availability of the port protocol settings for both ports depends on the UEFI (= mlxconfig) setting. For the first port it also depends on cable presence: when the cable is plugged in during system boot, Port 1 is always grayed out with “Eth” selected, regardless of the UEFI (= mlxconfig) setting. Restarting the bus driver (disable/enable the IB Adapter system device in Device Manager) doesn’t help. When the cable is unplugged and the port protocol is set to VPI, the setting is available. The setting also becomes available right after the cable is unplugged from a running system, and the IB network then works normally on the first port until the next reboot. It seems like the Windows driver ignores the firmware setting for the first port. I would greatly appreciate any ideas why that may happen. Currently the workaround is to boot with the cable unplugged from both ports #1. I suspected that one of the cables was faulty, but swapping the cables had no effect, so I assume the cables are fine.
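For completeness, the setup procedure mentioned above was essentially the following. This is only a sketch: the install path and service name are assumptions based on a default WinOF installation and may differ on HPE OEM builds, so verify them against your WinOF user manual.

```shell
:: Register OpenSM as a Windows service so a subnet manager runs at boot
:: (required on a back-to-back IB link, where no switch provides an SM).
:: Path, service name, and the --service flag are assumed WinOF defaults.
sc create OpenSM binPath= "\"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\opensm.exe\" --service" start= auto
sc start OpenSM
```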

Here is some additional info on my configuration:

Part numbers for the HCAs and cables:

764285-B21 - HP IB FDR/EN 40Gb 2P 544+FLR-QSFP Adptr

808722-B21 - HP 3M IB FDR QSFP V-series Optical Cbl

Firmware version: 2.40.5072

Driver version: 5.35.12978.0

WinOF version: 5.35

OpenSM: 3.3.11 (comes with WinOF)

WinMFT version: 4.6.0.48

mlxconfig -d mt4103_pci_cr0 query output (note LINK_TYPE_P1 = IB, while the driver shows ETH):

Device type:    ConnectX3Pro
PCI device:     mt4103_pci_cr0

Configurations:                 Next Boot
  SRIOV_EN                      True(1)
  NUM_OF_VFS                    16
  WOL_MAGIC_EN_P2               True(1)
  LINK_TYPE_P1                  IB(1)
  LINK_TYPE_P2                  IB(1)
  LOG_BAR_SIZE                  5
  BOOT_PKEY_P1                  0
  BOOT_PKEY_P2                  0
  BOOT_OPTION_ROM_EN_P1         True(1)
  BOOT_VLAN_EN_P1               False(0)
  BOOT_RETRY_CNT_P1             0
  LEGACY_BOOT_PROTOCOL_P1       PXE(1)
  BOOT_VLAN_P1                  1
  BOOT_OPTION_ROM_EN_P2         True(1)
  BOOT_VLAN_EN_P2               False(0)
  BOOT_RETRY_CNT_P2             0
  LEGACY_BOOT_PROTOCOL_P2       PXE(1)
  BOOT_VLAN_P2                  1
  IP_VER_P1                     IPv4(0)
  IP_VER_P2                     IPv4(0)
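For reference, this is roughly how the firmware link type was set. A sketch only; the LINK_TYPE encodings follow mlxconfig conventions (1 = IB, 2 = ETH, 3 = VPI), and the device name is the one from the query output.

```shell
:: Set both ports to InfiniBand in firmware; takes effect at the next reboot.
mlxconfig -d mt4103_pci_cr0 set LINK_TYPE_P1=1 LINK_TYPE_P2=1
:: Confirm the pending ("Next Boot") values:
mlxconfig -d mt4103_pci_cr0 query
```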

ibv_devinfo -v output (with the Port 1 cable unplugged):

hca_id: ibv_device0
    fw_ver:                     2.40.5072
    node_guid:                  7010:6fff:ffa8:1870
    sys_image_guid:             7010:6fff:ffa8:1873
    vendor_id:                  0x02c9
    vendor_part_id:             4103
    hw_ver:                     0x0
    phys_port_cnt:              2
    max_mr_size:                0xffffffffffffffff
    page_size_cap:              0x1000
    max_qp:                     65472
    max_qp_wr:                  16351
    device_cap_flags:           0x00005876
    max_sge:                    32
    max_sge_rd:                 0
    max_cq:                     65408
    max_cqe:                    4194303
    max_mr:                     130816
    max_pd:                     32764
    max_qp_rd_atom:             16
    max_ee_rd_atom:             0
    max_res_rd_atom:            0
    max_qp_init_rd_atom:        128
    max_ee_init_rd_atom:        0
    atomic_cap:                 ATOMIC_HCA (1)
    max_ee:                     0
    max_rdd:                    0
    max_mw:                     0
    max_raw_ipv6_qp:            0
    max_raw_ethy_qp:            0
    max_mcast_grp:              8192
    max_mcast_qp_attach:        244
    max_total_mcast_qp_attach:  1998848
    max_ah:                     0
    max_fmr:                    0
    max_srq:                    65472
    max_srq_wr:                 16383
    max_srq_sge:                31
    max_pkeys:                  128
    local_ca_ack_delay:         15
    port: 1
        state:                  PORT_DOWN (1)
        max_mtu:                4096 (5)
        active_mtu:             4096 (5)
        sm_lid:                 0
        port_lid:               0
        port_lmc:               0x00
        transport:              IB
        max_msg_sz:             0x40000000
        port_cap_flags:         0x00005890
        max_vl_num:             2 (2)
        bad_pkey_cntr:          0x0
        qkey_viol_cntr:         0x0
        sm_sl:                  0
        pkey_tbl_len:           16
        gid_tbl_len:            128
        subnet_timeout:         0
        init_type_reply:        0
        active_width:           4X (2)
        active_speed:           10.0 Gbps (4)
        phys_state:             POLLING (2)
        GID[  0]:               fe80:0000:0000:0000:7010:6fff:ffa8:1871
    port: 2
        state:                  PORT_ACTIVE (4)
        max_mtu:                4096 (5)
        active_mtu:             4096 (5)
        sm_lid:                 3
        port_lid:               3
        port_lmc:               0x00
        transport:              IB
        max_msg_sz:             0x40000000
        port_cap_flags:         0x00005890
        max_vl_num:             2 (2)
        bad_pkey_cntr:          0x0
        qkey_viol_cntr:         0x0
        sm_sl:                  0
        pkey_tbl_len:           16
        gid_tbl_len:            128
        subnet_timeout:         18
        init_type_reply:        0
        active_width:           4X (2)
        active_speed:           invalid speed (16)
        phys_state:             LINK_UP (5)
        GID[  0]:               fe80:0000:0000:0000:7010:6fff:ffa8:1872

Here it reports a wrong active speed on port 2, and I haven’t figured out how to change it, but the fabric works. I’m not sure whether this is related to the problem.

Thanks in advance!

Best wishes,

Dmitry

By "no IB switch" do you mean both servers are connected back-to-back?

I see both NICs have IB/Eth capability, and the cables seem suitable as well. I also see you have configured the firmware so that both ports have link type IB, which is fine.

Have you restarted the driver to apply the changes?

Have you confirmed on both adapters, right after a driver restart, that “HCA type configuration” in Device Manager is properly set to IB?

Did you observe in the Event Viewer any error/critical events that point to a Mellanox failure?

If the above suggestions have been applied and the issue still persists, then I suggest you contact Mellanox support (support@mellanox.com) to troubleshoot the case and dig into this deeper.
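As an aside, the driver restart mentioned above can also be scripted instead of clicking through Device Manager. A sketch only, assuming devcon.exe (shipped with the Windows Driver Kit) is available; the PCI IDs below are the standard ones for ConnectX-3 Pro (vendor 15B3, device 1007), so verify them with the find command first.

```shell
:: List Mellanox PCI devices to confirm the hardware ID (15B3 = Mellanox):
devcon find "PCI\VEN_15B3*"
:: Restart the adapter's bus driver (DEV_1007 = ConnectX-3 Pro; adjust if
:: the find output shows a different device ID):
devcon restart "PCI\VEN_15B3&DEV_1007*"
```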

Dear aviap, thank you for the feedback!

>>By "no IB-switch” do you mean both servers are back-to-back connected?

Yes, exactly. Sorry for being unclear.

>> Have you restarted the driver to apply the changes?

Yes, at least I believe I have. In my original post I mentioned that “Rebooting the bus driver (disable/enable IB Adapter system device in Device Manager) doesn’t help”. Maybe there is another way to restart the driver?

>> Have you confirmed on both adapters, right after driver-restart that “HCA type configuration” in the device-manager is set properly to IB?

No, it’s not - that’s why I’m confused. As I mentioned, the first port is always set to Ethernet if the cable is plugged in during system boot, and the setting is grayed out. I can only change it when the cable is unplugged and the firmware setting is VPI.

Or maybe by “both adapters” you mean the two virtual per-port adapters on a single machine, which are displayed under “Network adapters” rather than “System devices”? But even so, there is no “HCA type configuration” setting there.

>> Did you observe in the Event Viewer any error/critical events that point to a Mellanox failure?

Yes, I should probably have mentioned that. Here is the error message:

“According to the configuration under the “Jumbo Packets” advanced property, the MTU configured for device HP 10Gb/40Gb 2-port 544+FLR-QSFP IPoIB Adapter is 4092. The effective MTU is the supplied value + 4 bytes (for the IPoIB header). This configuration exceeds the MTU reported by OpenSM, which is 2048. This inconsistency may result in communication failures. Please change the MTU of IPoIB or OpenSM, and restart the driver.”

The thing is, I’m running OpenSM with default settings except for port binding (otherwise it binds to the 1st port by default, which is “Eth”, which leads us back to the thread topic). And I haven’t figured out how to change the MTU setting for OpenSM - there’s no such parameter (though I might be underestimating the depth of my misunderstanding). Anyway, this error seems to have been gone for several days, since I disabled the IPv6 protocol in the Windows adapter settings (it might still be a coincidence).
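For reference, the port binding described above, plus one way the SM-side MTU can be raised. A sketch only: the -g/--guid and -P/--Pconfig options are standard OpenSM flags, but the file path is an assumption, and the GUID is port 2's from the ibv_devinfo output earlier in the thread (colons dropped, 0x prefixed).

```shell
:: Bind OpenSM to a specific port by its port GUID:
opensm --guid 0x70106fffffa81872

:: The MTU the SM advertises comes from the partition configuration, not a
:: direct flag. A partitions.conf containing the line
::   Default=0x7fff, ipoib, mtu=5 : ALL=full;
:: advertises a 4096-byte IPoIB MTU (IB MTU code 5 = 4096; the default,
:: code 4, is the 2048 seen in the event-log error). Then:
opensm --guid 0x70106fffffa81872 --Pconfig C:\opensm\partitions.conf
```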

And now that you’ve asked, I’ve also checked the warnings (which I missed initially). Here is what I think is relevant:

“SingleFunc_4_0_0: Port #1 is configured to Ethernet. Since Ethernet is not supported in this device, it will automatically be configured to IB instead. Check PortType registry key.”

I’ve checked the registry - there is a “PortType” string parameter, which equals “eth,ib”. But it sits in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97d-e325-11ce-bfc1-08002be10318}\0058\Parameters, which is a clear indication to me that the developers didn’t want me to touch it. Besides, reinstalling Windows doesn’t affect the unwanted behavior, so tweaking the registry doesn’t seem like a solution to me. But if you tell me to try it - I’ll take the risk :)
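Purely as a sketch of what inspecting or changing that key would look like (not a recommendation): the 0058 instance number is machine-specific, and the “ib,ib” value is an assumption mirroring the observed “eth,ib” format. Back up the key before touching anything.

```shell
:: Read the current PortType value (find the right instance subkey by
:: checking DriverDesc under each numbered key first; 0058 is an example):
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e97d-e325-11ce-bfc1-08002be10318}\0058\Parameters" /v PortType

:: If taking the risk, forcing both ports to IB might look like this
:: (assumed value format; restart the driver afterwards):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e97d-e325-11ce-bfc1-08002be10318}\0058\Parameters" /v PortType /t REG_SZ /d "ib,ib" /f
```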

“HP InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544+FLR-QSFP Adapter (PCI bus 4, device 0, function: SR-IOV cannot be enabled because FW does not support SR-IOV. In order to resolve this issue please re-burn FW, having added parameters related to SR-IOV support.”

We automatically burned the most recent firmware from HP during driver installation. And we don’t require this functionality directly - the network is intended for RDMA only, to run ANSYS software via MPI (mostly CFX and Mechanical). But if you think it’s worth digging into - please let me know.

Thanks again!

Best wishes,

Dmitry

Was this ever resolved? I know this is an old post, but I have exactly the same problem: in Windows, a 2-port IB card (HP 649283-B21) works as IB on one port only; the other defaults to Ethernet, and Device Manager shows that port as an Ethernet adapter (and the other as IB). This behaviour persists regardless of setting LINK_TYPE_P1/P2 to 1 (IB) and rebooting. OpenSM is running as a service, one service for each IB port, each specifying the GUID. The first port comes up; the second service fails with ERR 3E03, IB_INVALID_AV_HANDLE. I tried deleting the drivers for the Ethernet port in Device Manager, but that didn’t work… ibstat shows Link Layer: Ethernet for port 2…