OVS-DPDK offloading VXLAN: running testpmd in a VM, Tx-pps can reach 8.9 Mpps but Rx-pps only 0.7 Mpps

Hello, I followed the "OVS-DPDK Hardware Offloads" docs, changed my BlueField to SmartNIC mode, and offloaded VXLAN on the NIC. After that, I can ping from a VM (BlueField pf1vf0) on the 1st host to a VM (Intel 82599) on the 2nd host.

But when I run testpmd in the VM on the 1st host, Tx-pps can reach 8.9 Mpps, but Rx-pps is only 0.7 Mpps.

On the NIC side, I can see that p1 received about 74M packets within 10 seconds, roughly 7 Mpps:

ifconfig p1 && sleep 10 && ifconfig p1

p1 Link encap:Ethernet HWaddr 0c:42:a1:d7:2c:93

RX packets:779972830 errors:133 dropped:108704031 overruns:0 frame:133

TX packets:796103555 errors:0 dropped:0 overruns:0 carrier:0

p1 Link encap:Ethernet HWaddr 0c:42:a1:d7:2c:93

RX packets:854586046 errors:133 dropped:108704031 overruns:0 frame:133

TX packets:796103555 errors:0 dropped:0 overruns:0 carrier:0
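(For reference, the delta is 854,586,046 - 779,972,830 ≈ 74.6M RX packets over the 10-second sleep, i.e. roughly 7.5 Mpps arriving on p1.)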

But the packet rate received on br-phy and br-ovs is only about 0.7 Mpps, which is close to the Rx-pps reported by testpmd in the VM on the 1st host.

ovs-ofctl dump-ports br-phy && sleep 1 && ovs-ofctl dump-ports br-phy

OFPST_PORT reply (xid=0x2): 2 ports

port p1: rx pkts=60309233, bytes=6900554986, drop=0, errs=0, frame=?, over=?, crc=?

tx pkts=7970, bytes=913092, drop=0, errs=0, coll=?

port LOCAL: rx pkts=392, bytes=34817, drop=10, errs=0, frame=0, over=0, crc=0

tx pkts=1604390, bytes=847040367, drop=0, errs=0, coll=0

OFPST_PORT reply (xid=0x2): 2 ports

port p1: rx pkts=61088861, bytes=6989771834, drop=0, errs=0, frame=?, over=?, crc=?

tx pkts=7970, bytes=913092, drop=0, errs=0, coll=?

port LOCAL: rx pkts=392, bytes=34817, drop=10, errs=0, frame=0, over=0, crc=0

tx pkts=1604390, bytes=847040367, drop=0, errs=0, coll=0
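(Here the one-second delta on p1 is 61,088,861 - 60,309,233 ≈ 0.78M packets, i.e. about 0.78 Mpps reaching br-phy, which matches the Rx-pps that testpmd reports in the VM.)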

Did I miss some important step?

========

1st host side VM:

ip addr:

ens8: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1450 qdisc mq state UP group default qlen 1000

link/ether ea:53:ca:c7:cf:4d brd ff:ff:ff:ff:ff:ff

inet 20.0.0.1/24 brd 20.0.0.255 scope global ens8

valid_lft forever preferred_lft forever

SmartNIC side:

ovs-vsctl show

b5c3d34f-d58b-46ea-8542-6ba7aac4d836

Bridge br-phy

fail_mode: standalone

datapath_type: netdev

Port br-phy

Interface br-phy

type: internal

Port "p1"

Interface "p1"

type: dpdk

options: {dpdk-devargs="0000:03:00.1"}

Bridge br-ovs

fail_mode: standalone

datapath_type: netdev

Port br-ovs

Interface br-ovs

type: internal

Port "pf1vf0"

Interface "pf1vf0"

type: dpdk

options: {dpdk-devargs="0000:03:00.1,representor=[0]"}

Port "vxlan1"

Interface "vxlan1"

type: vxlan

options: {dst_port="4789", key="100", local_ip="172.18.131.250", remote_ip="172.18.131.251"}

ovs_version: "2.12.1"

ovs-appctl dpctl/dump-flows type=offloaded

flow-dump from pmd on cpu core: 11

recirc_id(0),in_port(4),packet_type(ns=0,id=0),eth_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:29646658, bytes:3379581662, used:0.780s, actions:clone(tnl_push(tnl_port(3),header(size=50,type=4,eth(dst=90:e2:ba:8a:c3:2c,src=0c:42:a1:d7:2c:93,dl_type=0x0800),ipv4(src=172.18.131.250,dst=172.18.131.251,proto=17,tos=0,ttl=64,frag=0x4000),udp(src=0,dst=4789,csum=0x0),vxlan(flags=0x8000000,vni=0x64)),out_port(1)),2)

tunnel(tun_id=0x64,src=172.18.131.251,dst=172.18.131.250,flags(-df-csum+key)),recirc_id(0),in_port(3),packet_type(ns=0,id=0),eth_type(0x0800),ipv4(frag=no), packets:269473316, bytes:17246292224, used:0.000s, actions:4
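For comparison, the flows that are not offloaded and are still handled by the OVS-DPDK software datapath on the ARM cores can be dumped the same way; a minimal sketch, assuming this OVS build supports the same type filter and the usual dpif-netdev commands:

# flows handled in software rather than in hardware
ovs-appctl dpctl/dump-flows type=ovs
# per-PMD packet and cycle statistics, to see how busy the software path is
ovs-appctl dpif-netdev/pmd-stats-show

If the decap-direction flow shows its packet count growing here instead of in the offloaded dump, the receive path is being processed in software.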

========

My configure on NIC:

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="-w 0000:01:00.0,representor=[0],dv_flow_en=1,dv_esw_en=1,dv_xmeta_en=1"

systemctl restart openvswitch

ovs-vsctl add-br br-phy -- set Bridge br-phy datapath_type=netdev -- br-set-external-id br-phy bridge-id br-phy -- set bridge br-phy fail-mode=standalone other_config:hwaddr=0c:42:a1:d7:2c:93

ovs-vsctl add-port br-phy p1 -- set Interface p1 type=dpdk options:dpdk-devargs=0000:03:00.1

ip addr add 172.18.131.250/24 dev br-phy

ifconfig br-phy up

ovs-vsctl add-br br-ovs -- set Bridge br-ovs datapath_type=netdev -- br-set-external-id br-ovs bridge-id br-ovs -- set bridge br-ovs fail-mode=standalone

ovs-vsctl add-port br-ovs pf1vf0 -- set Interface pf1vf0 type=dpdk options:dpdk-devargs=0000:03:00.1,representor=[0]

ovs-vsctl add-port br-ovs vxlan1 -- set interface vxlan1 type=vxlan options:local_ip=172.18.131.250 options:remote_ip=172.18.131.251 options:key=flow options:dst_port=4789
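To double-check that these settings took effect after the restart, I can run (a minimal verification sketch using standard OVS-DPDK commands):

# confirm dpdk-init, hw-offload and dpdk-extra are present in the running configuration
ovs-vsctl get Open_vSwitch . other_config
# show which PMD thread polls which rx queue of p1 and pf1vf0
ovs-appctl dpif-netdev/pmd-rxq-show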

Hi,

You might check mlnx_perf output during testpmd on both sides. Also check ‘xstats’ from testpmd.

Try to find out whether the performance is good without DPDK, OVS, VXLAN and/or the other additional layers; this can reveal the layer where the issue starts.

Run basic TCP/IP (iperf) and RDMA (ib_read_bw, ib_write_bw, ib_send_bw) tests over the p0/p1 interfaces on the SmartNIC.
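For example (a rough sketch only; the IP addresses and the mlx5 device name are assumptions to adapt to your setup):

# TCP/IP baseline between the two hosts, bypassing the VMs
iperf3 -s                                        # on the 2nd host
iperf3 -c 172.18.131.251 -P 4 -t 30              # on the 1st host / SmartNIC side
# RDMA baseline with perftest (start the server first, then the client with the server IP)
ib_write_bw -d mlx5_1 --report_gbits             # server side
ib_write_bw -d mlx5_1 --report_gbits 172.18.131.251   # client side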

For further performance troubleshooting, I would suggest opening a support case with Nvidia; however, that requires a valid software support contract.

Hi, in the ifconfig output there are a lot of dropped packets. Do these counters increment during the test? You might try running the 'mlnx_perf -i <interface>' tool to see more information about sent/received packets, speed, drops, errors, discards, etc.

How do you run testpmd? What is running inside the VM on the other side? What is the rate of the traffic sent back from VM2 to VM1?

Are you able to run a simple iperf TCP/IP test between the VMs? Could it be that the reason is not on the BlueField card?

Thank you for your reply.

These dropped packets do not increase during the test:

ifconfig p1 && sleep 10 && ifconfig p1

p1 Link encap:Ethernet HWaddr 0c:42:a1:d7:2c:93

inet6 addr: fe80::e42:a1ff:fed7:2c93/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:15568958629 errors:0 dropped:0 overruns:0 frame:0

TX packets:920 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:1837137118230 (1.6 TiB) TX bytes:212363 (207.3 KiB)

p1 Link encap:Ethernet HWaddr 0c:42:a1:d7:2c:93

inet6 addr: fe80::e42:a1ff:fed7:2c93/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:15630617984 errors:0 dropped:0 overruns:0 frame:0

TX packets:920 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:1844412921058 (1.6 TiB) TX bytes:212363 (207.3 KiB)

mlnx_perf -i p1

Initializing mlnx_perf…

Sampling started.

rx_vport_unicast_packets: 7,297,010

rx_vport_unicast_bytes: 831,858,684 Bps = 6,654.86 Mbps

tx_vport_unicast_packets: 822,449

tx_vport_unicast_bytes: 52,636,736 Bps = 421.9 Mbps

rx_packets_phy: 7,296,982

rx_bytes_phy: 861,044,938 Bps = 6,888.35 Mbps

rx_65_to_127_bytes_phy: 7,297,055

rx_prio0_bytes: 861,075,382 Bps = 6,888.60 Mbps

rx_prio0_packets: 7,297,249


I have already run a simple iperf TCP/IP test between the VMs; it can reach 6.43 Gbits/sec.

2nd host side VM2:

ifconfig eth0 20.0.0.1/24 up

iperf3 -s -f m

1st host side VM1:

ifconfig ens8 20.0.0.2/24 up

ifconfig ens8 mtu 1450

iperf3 -c 20.0.0.2 -P 4 -t 60 -i 1

But when I run testpmd, it slows down. My configuration for running testpmd:

1st host side VM1:

echo 500 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

./testpmd -l 0-1 -n 1 -w 0000:00:08.0 -- -i --auto-start --forward-mode=rxonly

testpmd> show port stats all

Rx-pps: 780998
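(The extended counters mentioned above can be read from the same prompt; a sketch, exact counter names depend on the PMD:)

testpmd> show port xstats all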

the other side host:

ovs-vsctl add-br br-ex -- set bridge br-ex datapath_type=netdev

ovs-vsctl add-port br-ex dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:42:00.1

ovs-vsctl add-br br-int -- set bridge br-int datapath_type=netdev

ovs-vsctl add-port br-int vxlan1 -- set interface vxlan1 type=vxlan options:remote_ip=172.18.131.250 options:local_ip=172.18.131.251 options:key=flow options:dst_port=4789

ifconfig br-ex 172.18.131.251 netmask 255.255.255.0

ovs-vsctl add-port br-int vm2 -- set Interface vm2 type=dpdkvhostuserclient

ovs-vsctl set Interface vm2 options:vhost-server-path="/tmp/sock0"
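(To confirm that the vhost-user socket at /tmp/sock0 actually connected before starting traffic, the vswitchd log can be checked; a sketch assuming the default log location:)

grep -i vhost /var/log/openvswitch/ovs-vswitchd.log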

the other side inside VM2:

echo 300 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

./testpmd -l 0-1 -n 1 -w 0000:00:03.0 -- -i --auto-start --forward-mode=txonly

testpmd> show port stats all

Tx-pps: 6808033

What can I do next to improve the performance?

I tried not using VXLAN on the SmartNIC, and it seems normal. Note that I didn't change the configuration of the other side host.

Is the SmartNIC too slow at VXLAN decapsulation actions? Or did I just get something wrong in the VXLAN configuration?

ovs-vsctl show

b5c3d34f-d58b-46ea-8542-6ba7aac4d836

Bridge br-ovs

fail_mode: standalone

datapath_type: netdev

Port "pf1vf0"

Interface "pf1vf0"

Port br-ovs

Interface br-ovs

type: internal

Port "p1"

Interface "p1"

ovs_version: "2.12.1"

mlnx_perf -i pf1vf0

Initializing mlnx_perf…

Sampling started.

vport_tx_packets: 7,335,992

vport_tx_bytes: 836,302,176 Bps = 6,690.41 Mbps


mlnx_perf -i p1

Initializing mlnx_perf…

Sampling started.

ch_poll: 12

ch_arm: 12

rx_packets_phy: 7,435,257

rx_bytes_phy: 877,360,916 Bps = 7,018.88 Mbps

rx_65_to_127_bytes_phy: 7,435,279

rx_prio0_bytes: 877,417,084 Bps = 7,019.33 Mbps

rx_prio0_packets: 7,435,738

ch2_poll: 6

ch2_arm: 6

ch3_poll: 6

ch3_arm: 6


Previous configuration and experimental results using VXLAN:

ovs-vsctl show

b5c3d34f-d58b-46ea-8542-6ba7aac4d836

Bridge br-phy

fail_mode: standalone

datapath_type: netdev

Port br-phy

Interface br-phy

type: internal

options: {n_rxq="4"}

Bridge br-ovs

fail_mode: standalone

datapath_type: netdev

Port "pf1vf0"

Interface "pf1vf0"

Port br-ovs

Interface br-ovs

type: internal

Port "p1"

Interface "p1"

Port "vxlan1"

Interface "vxlan1"

type: vxlan

options: {dst_port="4789", key="100", local_ip="172.18.131.250", remote_ip="172.18.131.251"}

ovs_version: "2.12.1"

mlnx_perf -i pf1vf0

Initializing mlnx_perf…

Sampling started.

vport_tx_packets: 829,190

vport_tx_bytes: 53,068,160 Bps = 424.54 Mbps


mlnx_perf -i p1

Initializing mlnx_perf…

Sampling started.

ch_poll: 5

ch_arm: 5

rx_vport_unicast_packets: 6,616,232

rx_vport_unicast_bytes: 754,250,448 Bps = 6,034 Mbps

tx_vport_unicast_packets: 824,772

tx_vport_unicast_bytes: 52,785,408 Bps = 422.28 Mbps

rx_packets_phy: 6,616,223

rx_bytes_phy: 780,714,432 Bps = 6,245.71 Mbps

rx_65_to_127_bytes_phy: 6,616,169

rx_prio0_bytes: 780,633,248 Bps = 6,245.6 Mbps

rx_prio0_packets: 6,615,547

ch3_poll: 5

ch3_arm: 5


You might check CPU utilization and what uses the CPU, and go from there. If something goes via the kernel, it will consume ARM CPU and might be slower.
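A minimal sketch of what that check could look like on the ARM side (tool availability on the BlueField image is an assumption, and the mask/queue values are only examples):

# per-PMD busy vs. idle cycles inside OVS-DPDK
ovs-appctl dpif-netdev/pmd-stats-show
# overall per-core utilization on the ARM cores during the test
mpstat -P ALL 1
# if the PMD threads are saturated, more cores/queues can be tried
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xc
ovs-vsctl set Interface p1 options:n_rxq=4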

Check VXLAN performance without using OVS/DPDK: https://docs.mellanox.com/display/OFEDv512371/VXLAN+Hardware+Stateless+Offloads
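A kernel-only VXLAN tunnel over p1 could look like this (a sketch; it assumes OVS is stopped, the 172.18.131.x addresses are put back on the uplink interfaces, a mirrored setup with swapped IPs is done on the other host, and the 30.0.0.x overlay subnet is just an example):

ip link add vxlan100 type vxlan id 100 dev p1 local 172.18.131.250 remote 172.18.131.251 dstport 4789
ip addr add 30.0.0.1/24 dev vxlan100
ip link set vxlan100 up
# then run iperf3 -s on one side and iperf3 -c 30.0.0.2 on the other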

For more extensive troubleshooting, you will need a valid support contract with Nvidia/Mellanox; with that, you can open a support case with technical support.