SR-IOV in ESXi and vSphere 6.5

Hello!

We are evaluating the possibility of upgrading our vSphere 6.0 infrastructure to 6.5 (vCenter and ESXi hosts).

We have a ScaleIO test deployment on ESXi 6.0 hosts, with ConnectX-3 (MCX354A-FCBT, firmware 2.4) adapters in Ethernet mode and SR-IOV for cluster data and storage backend traffic, interconnected by SX1036 Ethernet switches.

I am trying to find out whether SR-IOV with ConnectX-3 adapters is supported in ESXi 6.5, but had no luck with the native driver: no virtual functions appear in the PCI devices list even though max_vfs seemed to be enabled successfully.
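For reference, this is roughly what I tried on the 6.5 host; nmlx4_core is the native ConnectX-3 module, and the VF count of 8 is just an example value:

# Set the VF count on the native ConnectX-3 module (whether nmlx4_core actually honors max_vfs is exactly what is in question)
esxcli system module parameters set -m nmlx4_core -p "max_vfs=8"
# Reload the module by rebooting the host
reboot
# After the reboot, check whether any virtual functions appeared
esxcli network sriovnic list
esxcli hardware pci list | grep -i mellanox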

I also attempted to remove the native VIBs and install OFED 2.4, but that caused the adapter to disappear completely from the Network Adapter list.
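The OFED attempt looked roughly like this; the VIB names are the ones shown by esxcli software vib list, and the bundle path is only an example:

# Remove the inbox ConnectX-3 driver VIBs
esxcli software vib remove -n nmlx4-rdma -n nmlx4-en -n nmlx4-core
# Install the Mellanox OFED 2.4 offline bundle (example path on a local datastore)
esxcli software vib install -d /vmfs/volumes/datastore1/MLNX-OFED-ESX-2.4.0.0.zip
reboot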

Any ideas or advice?

Thanks!

Hello Jaehoon!

There don’t seem to be many scale-out virtual storage solutions that leverage RDMA out there.

The reason for using SR-IOV is to try to cut latency, and I don’t think this is possible with the native inbox 6.5 drivers. Input or feedback from Mellanox would be much appreciated here.

I know another solution would be to add another adapter and pass it through directly to the SDS VMs, but this would add significant complexity and cost, as additional switches would be needed to accommodate the new ports. Moreover, it would be hardware overkill.

Guess we will stick to 6.0 for the time being.

Mellanox’s VMware support for older SKUs always lags behind. This seems like a deliberate choice to push capex toward new equipment, since drivers are readily available for the newer adapters.

This is not really encouraging for the existing loyal customer base…

Thanks for your input!

Are they now supported on VMware ESXi 6.5 with SR-IOV?

We want to buy three ConnectX-4 Lx cards (one for each of three servers), model MCX4121A-XCAT, and connect them to 2 x Cisco Nexus 3048TP-1GE switches (RoCE v1/2).

This is important for me because we want to buy these cards for the university for scientific research.

Best Regards

Robert

Yes, absolutely!

I found an EOL list showing ConnectX-3 and ConnectX-4 as discontinued.

I’m waiting for the iSER release for vSphere 6.5 or 6.7.

But I think ConnectX-5 or above will be supported by a vSphere Ethernet iSER driver in a future Mellanox release.

Regards,

Jae-Hoon Choi

Did you configure SR-IOV in the firmware of the ConnectX-3?

You must set it there and then restart your host.
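A rough sketch with the Mellanox Firmware Tools (MFT) installed on the host; the device name mt4099_pciconf0 and the VF count of 8 are only examples:

# Query the current firmware configuration of the adapter
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 query
# Enable SR-IOV in firmware and set the number of VFs the firmware exposes
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8
# The new firmware configuration only takes effect after a host restart
reboot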

Regards,

Jae-Hoon Choi

Hi!

You are correct!

ScaleIO is a very powerful hyper-converged storage solution.

I’m also thinking about its future.

If the new ScaleIO can support RoCE over Ethernet, I’ll test it on our vSphere infrastructure.

But physical and virtual (ESXi-based) deployment is very difficult with Mellanox today.

I’ll wait for solid ESXi driver support from Mellanox.

Jaehoon Choi

Hi!

The current ScaleIO doesn’t support ESXi 6.5.

I saw information on the ScaleIO community that a new ScaleIO release will support ESXi 6.5:

EMC Community Network - DECN: ScaleIO 2.X and Vmware 6.5 Support | Dell US

Mellanox doesn’t properly support the ESXi 6.5 environment right now.

If you don’t use RDMA-based protocols, just wait for the new ScaleIO release for ESXi 6.5, then use the ESXi 6.5 inbox Ethernet-only driver to build the new ScaleIO configuration.

Have a nice day…:)

No! Only the ConnectX-4 OCP (Open Compute Project) cards are deprecated now.

  • I corrected a typo that caused me to misread the EOL lists.

Regards,

Jae-Hoon Choi

Is there any progress with SR-IOV? We are already on ESXi 6.5 U1 and it still does not work :/. The native driver still has no support for max_vfs (esxcli system module parameters list -m nmlx4_core), tested on a ConnectX-3 EN:

enable_64b_cqe_eqe       int   Enable 64-byte CQEs/EQEs when the FW supports this. Values: 1 - enabled, 0 - disabled. Default: 0

enable_dmfs              int   Enable Device Managed Flow Steering. Values: 1 - enabled, 0 - disabled. Default: 1

enable_qos               int   Enable Quality of Service support in the HCA. Values: 1 - enabled, 0 - disabled. Default: 0

enable_rocev2            int   Enable RoCEv2 mode for all devices. Values: 1 - enabled, 0 - disabled. Default: 0

enable_vxlan_offloads    int   Enable VXLAN offloads when supported by NIC. Values: 1 - enabled, 0 - disabled. Default: 1

log_mtts_per_seg         int   Log2 number of MTT entries per segment. Values: 1-7. Default: 3

log_num_mgm_entry_size   int   Log2 MGM entry size, which defines the number of QPs per MCG (for example, value 10 results in 248 QPs per MGM entry). Values: 9-12. Default: 12

msi_x                    int   Enable MSI-X. Values: 1 - enabled, 0 - disabled. Default: 1

mst_recovery             int   Enable recovery mode (only NMST module is loaded). Values: 1 - enabled, 0 - disabled. Default: 0

rocev2_udp_port          int   Destination port for RoCEv2. Values: 1-65535. Default: 4791
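For completeness, these are the checks I run after a reboot to look for VFs; nothing Mellanox-related shows up as a virtual function here:

# List NICs that expose SR-IOV virtual functions (comes back empty)
esxcli network sriovnic list
# Only the physical Mellanox functions appear in the PCI device list
esxcli hardware pci list | grep -i mellanox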

Yes, I configured it in the firmware of the ConnectX-3 and set max_vfs on the driver (the other driver, on ESXi 6.0), roughly as shown below.
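On ESXi 6.0 the VF count was set on the Mellanox OFED driver module; the module name mlx4_core and the value 8 reflect that setup and are only an example:

# ESXi 6.0 with the Mellanox OFED driver: expose 8 VFs (example value)
esxcli system module parameters set -m mlx4_core -p "max_vfs=8"
# Equivalent legacy syntax
esxcfg-module -s "max_vfs=8" mlx4_core
reboot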

I checked the compatibility, and ConnectX-3 is not supported on VMware (click “the latest VMware driver version”):

http://www.mellanox.com/page/products_dyn?product_family=29

Unfortunately, it did not work; I verified it.

It only supports ConnectX-4/5 cards.

ConnectX-4 discontinued? SR-IOV is supported on VMware ESXi 5.5 - 6.5:

Yes.

VMware ESXi 6.5 supports SR-IOV.

Here are two links about Mellanox SR-IOV configuration.

www.mellanox.com/related-docs/prod_software/Mellanox_MLNX-NATIVE-ESX-ConnectX-4-5_Driver_for_VMware_ESXi_5.5_and_6.0_Release_Notes_v4_15_10_3_and_4_5_10_3.pdf

www.mellanox.com/related-docs/prod_software/Mellanox_MLNX-NATIVE-ESX-ConnectX-4-5_Driver_for_VMware_ESXi_5.5&6.0_User_Manual_v4_15_10_3&4_5_10_3.pdf

The latest 4.16.10.3 driver user guide has huge blank pages.

But the old 4.15.10.3 configuration for ESXi 6.0 can also be used on ESXi 6.5, too…:)
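For ConnectX-4/5 the native driver exposes the same kind of module parameter; a minimal sketch, assuming the nmlx5_core module and 4 VFs as an example:

# Enable 4 VFs on the native ConnectX-4/5 driver (example value)
esxcli system module parameters set -m nmlx5_core -p "max_vfs=4"
reboot
# After the reboot the VFs can be assigned to VMs as SR-IOV passthrough adapters
esxcli network sriovnic list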

Regards,

Jae-Hoon Choi