DOCA Flow application results in lower packet rates

I encountered an issue while running the NAT application on BF3. I configured the ports according to the documentation: SF2 and p0 on one bridge, SF3 and pf0hpf on another. When sending packets with testpmd's txonly mode, as shown in the path below, I observed a significant traffic drop between SF3 and SF2, even though the NAT functionality itself works correctly. What could be causing this? Could a missing configuration be preventing the flow table from being offloaded?
OVS path: pf0hpf -> SF3 -> nat -> SF2 -> p0
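
For reference, the bridge wiring follows the pattern from the NAT application documentation. A minimal sketch of the equivalent ovs-vsctl commands is below; the bridge names ovsbr1/ovsbr2 are placeholders, and the SF representor names (en3f0pf0sf2, en3f0pf0sf3) are the ones from my setup:

# bridge 1: uplink p0 together with the SF2 representor
ovs-vsctl add-br ovsbr1
ovs-vsctl add-port ovsbr1 p0
ovs-vsctl add-port ovsbr1 en3f0pf0sf2
# bridge 2: host PF representor pf0hpf together with the SF3 representor
ovs-vsctl add-br ovsbr2
ovs-vsctl add-port ovsbr2 pf0hpf
ovs-vsctl add-port ovsbr2 en3f0pf0sf3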

I conducted further experiments by modifying the simple_fwd_vnf example to use its simple forwarding function to receive and forward traffic. The network setup was: host → pf0hpf → sf5 → simple_fwd_vnf → sf4 → p0.
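
For context, the modified simple_fwd_vnf was attached to the two SFs through their auxiliary devices, roughly as in the application guide. A minimal launch sketch (the auxiliary indices .4 and .5 are assumptions based on my SF numbering and may not match the representor suffixes on every system; -l sets the log level):

# run simple_fwd_vnf on the two SFs; dv_flow_en=2 selects the mlx5 HW steering flow engine
./doca_simple_fwd_vnf -a auxiliary:mlx5_core.sf.4,dv_flow_en=2 \
                      -a auxiliary:mlx5_core.sf.5,dv_flow_en=2 -- -l 60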

I observed a peculiar phenomenon: with simple_fwd_vnf running on the receiving end and testpmd sending packets, a larger packet length (e.g., 1024 bytes), which corresponds to a packet rate of around 11 Mpps, lets simple_fwd_vnf operate normally (a sketch of the testpmd commands is included after the statistics). The traffic statistics for each port were as follows:

tx:
17:29:03 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
17:29:04 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 oob_net0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 tmfifo_net0 9.00 14.00 0.86 4.77 0.00 0.00 0.00 0.00
17:29:04 p0 11635037.00 11725124.00 11680483.47 11770926.79 0.00 0.00 0.00 96.43
17:29:04 p1 1.00 0.00 0.12 0.00 0.00 0.00 1.00 0.00
17:29:04 pf0hpf 11725105.00 11634970.00 11725105.00 11634970.00 0.00 0.00 0.00 96.05
17:29:04 en3f0pf0sf2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 enp3s0f0s2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 en3f0pf0sf3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 enp3s0f0s3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 ovs-doca 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:29:04 ovsbr1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

rx:
21:01:13 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
21:01:14 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 oob_net0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 tmfifo_net0 8.00 8.00 0.52 4.93 0.00 0.00 0.00 0.00
21:01:14 p0 11907254.00 11825841.00 11953761.69 11871985.96 0.00 0.00 0.00 97.93
21:01:14 p1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 ovs-doca 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 pf0hpf 11836601.00 11836630.00 11836601.00 11836630.00 0.00 0.00 0.00 96.97
21:01:14 pf1hpf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 en3f0pf0sf0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 enp3s0f0s0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 en3f1pf1sf0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 enp3s0f1s0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 en3f0pf0sf4 11857532.00 11937010.00 11857532.00 11937010.00 0.00 0.00 0.00 97.79
21:01:14 enp3s0f0s4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 en3f0pf0sf5 11878084.00 11878104.00 11878084.00 11878104.00 0.00 0.00 0.00 97.31
21:01:14 enp3s0f0s5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 br1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:01:14 br2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
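
For reference, the sender side is plain testpmd in txonly mode; the knob I vary is the TX packet length. A minimal sketch of the commands (the EAL core list, PCI address, and queue counts are placeholders for my actual values):

dpdk-testpmd -l 0-8 -n 4 -a <PF PCI address> -- -i --rxq=8 --txq=8 --nb-cores=8
testpmd> set fwd txonly       # generate packets instead of forwarding
testpmd> set txpkts 1024      # packet length for the first run (900 for the second)
testpmd> start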

However, when I reduced the packet length even slightly, to 900 bytes, the packet rate rose to around 13 Mpps and simple_fwd_vnf no longer worked properly: the traffic between sf5 and sf4 dropped sharply. The traffic statistics for each port were as follows:

tx:
17:31:31 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
17:31:32 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 oob_net0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 tmfifo_net0 5.00 5.00 0.32 4.16 0.00 0.00 0.00 0.00
17:31:32 p0 60998.00 13774383.00 53848.24 12160196.59 0.00 0.00 0.00 99.62
17:31:32 p1 2.00 0.00 0.21 0.00 0.00 0.00 2.00 0.00
17:31:32 pf0hpf 13774472.00 61071.00 12106469.53 53674.89 0.00 0.00 0.00 99.18
17:31:32 en3f0pf0sf2 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 enp3s0f0s2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 en3f0pf0sf3 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 enp3s0f0s3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 ovs-doca 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:31:32 ovsbr1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

rx:
21:04:05 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
21:04:06 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 oob_net0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 tmfifo_net0 29.00 43.00 1.87 8.30 0.00 0.00 0.00 0.00
21:04:06 p0 13895777.00 61263.00 12267367.40 54083.74 0.00 0.00 0.00 100.49
21:04:06 p1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 ovs-doca 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 pf0hpf 64259.00 64260.00 56477.64 56478.52 0.00 0.00 0.00 0.46
21:04:06 pf1hpf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 en3f0pf0sf0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 enp3s0f0s0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 en3f1pf1sf0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 enp3s0f1s0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 en3f0pf0sf4 60122.00 12221279.00 52841.60 10741358.50 0.00 0.00 0.00 87.99
21:04:06 enp3s0f0s4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 en3f0pf0sf5 62652.00 62644.00 55065.23 55058.20 0.00 0.00 0.00 0.45
21:04:06 enp3s0f0s5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 br1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:04:06 br2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
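
(The per-interface tables above are standard sar -n DEV output; assuming sysstat is installed, they can be reproduced on the Arm side with a one-second sampling interval: sar -n DEV 1.)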

Below is some relevant information:

root@localhost:/home/ubuntu# sudo ovs-dpctl show
system@ovs-system:
lookups: hit:402709 missed:5390360 lost:4637436
flows: 1
masks: hit:5811697 total:1 hit/pkt:1.00
cache: hit:401769 hit-rate:6.94%
caches:
masks-cache: size:256
port 0: ovs-system (internal)
port 1: p1
port 2: ovsbr2 (internal)
port 3: p0
port 4: br1 (internal)
port 5: br2 (internal)
port 6: pf0hpf
port 7: pf1hpf
port 8: en3f0pf0sf2
port 9: en3f0pf0sf3
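
Regarding the offload question: whether datapath flows are actually offloaded can be checked with the standard OVS commands sketched below (the exact output fields vary between OVS versions):

ovs-appctl dpctl/dump-flows type=offloaded   # flows installed in hardware
ovs-appctl dpctl/dump-flows type=ovs         # flows still handled in software
ovs-vsctl get Open_vSwitch . other_config    # confirm hw-offload / doca-init are set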

root@localhost:/opt/mellanox/doca/applications# sudo ovs-vsctl list Open_vSwitch
_uuid : a0c32fa2-e289-4a0f-bbf5-627e64cbbba1
bridges : [086cbd16-1d65-4282-838c-09655fca09de, a1c1d2ea-895d-4808-992c-2f8a3ef98595]
cur_cfg : 377
datapath_types : [doca, netdev, system]
datapaths : { }
db_version : "8.5.1"
doca_initialized : true
doca_version : "3.0.0058"
dpdk_initialized : true
dpdk_version : "MLNX_DPDK 22.11.2504.1.0"
external_ids : {hostname=localhost.localdomain, rundir="/var/run/openvswitch", system-id="206ee97f-1ad9-4f0d-b6db-963b74694411"}
iface_types : [bareudp, doca, docavdpa, docavhostuser, docavhostuserclient, dpdk, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, gtpu, internal, ip6erspan, ip6gre, lisp, patch, srv6, stt, system, tap, vxlan]
manager_options :
next_cfg : 377
other_config : {doca-init="true", hw-offload="true"}
ovs_version : "3.0.0-0056-25.04-based-3.3.5"
ssl :
statistics : {}
system_type : ubuntu
system_version : "22.04"
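
For completeness, the offload-related keys visible in other_config above are normally applied before the bridges are created; a sketch of how they are set (the service name assumes the stock Ubuntu packaging, and OVS typically has to be restarted for doca-init to take effect):

ovs-vsctl set Open_vSwitch . other_config:doca-init=true
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch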

root@localhost:/opt/mellanox/doca/applications# ovs-vsctl show
a0c32fa2-e289-4a0f-bbf5-627e64cbbba1
    Bridge br2
        datapath_type: netdev
        Port pf0hpf
            Interface pf0hpf
                type: dpdk
                options: {n_rxq="32", n_txq="32"}
        Port en3f0pf0sf5
            Interface en3f0pf0sf5
                type: dpdk
                options: {n_rxq="32", n_txq="32"}
        Port br2
            Interface br2
                type: internal
    Bridge br1
        datapath_type: netdev
        Port en3f0pf0sf4
            Interface en3f0pf0sf4
                type: dpdk
                options: {n_rxq="32", n_txq="32"}
        Port br1
            Interface br1
                type: internal
        Port p0
            Interface p0
                type: dpdk
                options: {n_rxq="32", n_txq="32"}
    ovs_version: "3.0.0-0056-25.04-based-3.3.5"

Hi 2652510143,

You can try using the DOCA Flow Tune Tool to optimize the DOCA Flow application throughput: https://docs.nvidia.com/doca/sdk/doca+flow+tune+tool/index.html.

If the issue persists, please contact doca-feedback@nvidia.com for further support.

Regards,

Quanying