100G Speed Tests on VMware

Hi,

Is there any guide for fine-tuning the ConnectX-4 cards with the MSN2700 to reach full bandwidth?

We have several HP DL380 Gen9 servers with PCIe 3.0 x16 slots and dual-port 100G cards, and my iperf tests don't seem to fully use the bandwidth.

Server:

[root@host1:~] /usr/lib/vmware/vsan/bin/iperf.copy -s -B host1.datacenter.local


Server listening on TCP port 5001

Binding to local address host1.datacenter.local

TCP window size: 64.0 KByte (default)

Client:

[root@host2:~] /usr/lib/vmware/vsan/bin/iperf -m -i t300 -c host1.datacenter.local -fm -P 4

WARNING: interval too small, increasing from 0.00 to 0.5 seconds.


Client connecting to host1.datacenter.local, TCP port 5001

TCP window size: 0.03 MByte (default)


[ 6] local 10.0.220.2 port 19969 connected with 10.0.220.13 port 5001

[ 3] local 10.0.220.2 port 13381 connected with 10.0.220.13 port 5001

[ 4] local 10.0.220.2 port 58309 connected with 10.0.220.13 port 5001

[ 5] local 10.0.220.2 port 53137 connected with 10.0.220.13 port 5001

[SUM] 0.0-10.0 sec 27880 MBytes 23386 Mbits/sec
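One thing that stands out in the output above is the default TCP window (64.0 KByte on the server side, 0.03 MByte on the client), which is small for a 100 GbE path. A possible retest, with illustrative (not tuned) values for the socket buffer and stream count:

/usr/lib/vmware/vsan/bin/iperf -c host1.datacenter.local -fm -P 8 -w 1M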

Why have 100G at all if it's not really getting used?

Thanks

Hi Tim,

Did you go through the tuning guide?

Performance Tuning for Mellanox Adapters

https://community.mellanox.com/s/article/performance-tuning-for-mellanox-adapters

Did you try running iperf with taskset to pin it to cores on the same NUMA node as the adapter? (See the example after the NUMA listing below.)

List your NUMA node with:

mst status -v

lscpu will show you the CPUs on each NUMA node, for example:

NUMA node0 CPU(s): 0-3

NUMA node1 CPU(s): 4-7
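For example, if mst status -v shows the adapter on NUMA node 0, a pinned client run would look like this (Linux syntax; the core list 0-3 is illustrative, taken from the node0 line above):

taskset -c 0-3 iperf -c host1.datacenter.local -fm -P 4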

BR

Marc

Hi Tim,

Did you manage to get better performance?

BR

Marc

Hi,

For the mst command:

    1. Download the MFT for VMware VIB package from: http://www.mellanox.com/products/management_tools.php
    2. Install the package. Run:

esxcli software vib install -v <path to the MFT .vib>

esxcli software vib install -v <path to the MST .vib>

NOTE: For a VIB installation example, see below.

    3. Reboot the system.
    4. Start the mst driver. Run:

/opt/mellanox/bin/mst start
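Putting the steps together, an end-to-end sketch (the .vib file names are placeholders for whatever the MFT download actually contains):

esxcli software vib install -v /tmp/<mft vib>

esxcli software vib install -v /tmp/<nmst vib>

reboot

/opt/mellanox/bin/mst start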

For performance tuning, see:

http://www.mellanox.com/related-docs/prod_software/Mellanox_MLNX-NATIVE-ESX-ConnectX-4-5_Driver_for_VMware_ESXi_6.5_User_Manual_v4.16.8.8.pdf

Marc

Hi,

Can you try to run an IB bandwidth test?

ib_write_bw / ib_read_bw
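Both come with the perftest package and also run over RoCE on plain Ethernet. A minimal sketch between two hosts, assuming the device enumerates as mlx5_0:

ib_write_bw -d mlx5_0                          # server side

ib_write_bw -d mlx5_0 host1.datacenter.local   # client side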

Are you running on bare metal or in a VM?

Marc

It's Ethernet, not InfiniBand.

It's bare-metal ESXi 6.5.

How do I test it?

perftest-3.4-0.9.g98a9a17.tar

Not yet done. I've read through the guide, but I haven't found a compatible package for VMware yet.

Is there a package for VMware ESXi 6.5 that includes tools like raw_ethernet_bw?

I've updated my boxes to the latest ConnectX-4 drivers from Mellanox, but I can't seem to execute the commands you listed before.

Okay, thanks.

Installed and rebooted:

/opt/mellanox/bin] ./mst status -v

PCI devices:


DEVICE_TYPE       MST                 PCI      RDMA  NET         NUMA

ConnectX4(rev:0)  mt4115_pciconf0     88:00.0        net-vmnic6

ConnectX4(rev:0)  mt4115_pciconf0.1   88:00.1        net-vmnic7

/opt/mellanox/bin] ./mlxfwmanager

Querying Mellanox devices firmware …

Device #1:


Device Type: ConnectX4

Part Number: MCX416A-CCA_Ax

Description: ConnectX-4 EN network interface card; 100GbE dual-port QSFP28; PCIe3.0 x16; ROHS R6

PSID: MT_2150110033

PCI Device Name: mt4115_pciconf0

Versions:  Current      Available

  FW       12.16.1020   N/A

  PXE      3.4.0812     N/A

I think it's all automated in VMware ESXi 6.5 now?
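One way to check, since the NUMA column in the mst status output above came back empty: esxcli reports a NUMA node for every PCI device, so you can verify where the adapter landed.

esxcli hardware pci list

Look for the entry at address 0000:88:00.0 (vmnic6 above) and its "NUMA Node" field.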

numa.nodeAffinity

Constrains the set of NUMA nodes on which a virtual machine’s virtual CPU and memory can be scheduled.

Note:

When you constrain NUMA node affinities, you might interfere with the ability of the NUMA scheduler to rebalance virtual machines across NUMA nodes for fairness. Specify NUMA node affinity only after you consider the rebalancing issues.