I am not sure about the reason behind such a drastic throughput drop. With a simple switch configuration, iperf3 with a single client gives around 45 Gbps, but with flow_acl I get only around 400 Mbps.
My primary suspect is that the rules are somehow not getting offloaded to the NIC. I have made sure to enable hw-tc-offload and OVS hw-offload. I have also tried tc flower, but no luck there either. My general observation is that with a single OVS bridge everything works perfectly fine, but as soon as I use multiple OVS bridges my throughput drops drastically.
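For reference, the offload settings I enabled look roughly like this (interface names are from my setup; adjust to yours):

```shell
# Enable TC flower offload on the uplink and the host-facing representor
ethtool -K p0 hw-tc-offload on
ethtool -K pf0hpf hw-tc-offload on

# Enable OVS hardware offload and restart OVS so it takes effect
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch
```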
Hi there, giving it more cores didn't help increase throughput. I think that since we are statically offloading rules to the NIC, just 2 cores are sufficient. In my opinion, my issue has something to do with how I am setting up OVS with two bridges. Please take a look at the shared PDF and let me know if I am making any mistake.
sudo ovs-dpctl show
system@ovs-system:
  lookups: hit:0 missed:290073 lost:171
  flows: 0
  masks: hit:1 total:0 hit/pkt:0.00
  caches:
    masks-cache: size:256
  port 0: ovs-system (internal)
  port 1: ovsbr1 (internal)
  port 2: pf0hpf
  port 3: p0
  port 4: en3f0pf0sf0
  port 5: ovsbr2 (internal)
  port 6: ovsbr3 (internal)
  port 7: en3f0pf0sf1
  port 8: en3f0pf0sf2
sudo ovs-vsctl show
16266ee6-e532-4a01-be71-b9ef78f31320
    Bridge ovsbr3
        Port p0
            Interface p0
        Port en3f0pf0sf2
            Interface en3f0pf0sf2
        Port ovsbr3
            Interface ovsbr3
                type: internal
    Bridge ovsbr1
        Port en3f0pf0sf0
            Interface en3f0pf0sf0
        Port ovsbr1
            Interface ovsbr1
                type: internal
    Bridge ovsbr2
        Port pf0hpf
            Interface pf0hpf
        Port ovsbr2
            Interface ovsbr2
                type: internal
        Port en3f0pf0sf1
            Interface en3f0pf0sf1
    ovs_version: "2.17.8-3feee121f"
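(Side note on the stats above: the high miss count with zero hits suggests flows are not staying in the fast path. One way to check whether any flows were actually offloaded to hardware, roughly, is:)

```shell
# Datapath flows that were offloaded to hardware (empty means no offload)
ovs-appctl dpctl/dump-flows type=offloaded

# Flows still handled by the kernel datapath
ovs-appctl dpctl/dump-flows type=ovs

# TC filters installed on the uplink ingress
tc -s filter show dev p0 ingress
```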
Can you please explain more about the difference between the case where you see 45 Gbps and the case where you see 400 Mbps? Where are you running flow_acl when you get 400 Mbps, and what is the simple switch configuration on which you see 45 Gbps? Do you see packets reaching the DPU's kernel in the 400 Mbps case (and are any CPUs pegged)?
Hello, my simple switch configuration is the default one, where traffic flows between two hosts, each connected through its respective DPU. The rule applied to the switch is: "ovs-ofctl add-flow ovsbr0 action=normal"
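Concretely, the simple switch setup amounts to something like this (bridge name ovsbr0 and port names are from my setup):

```shell
# Default single-bridge setup: uplink + host-facing representor
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 p0
ovs-vsctl add-port ovsbr0 pf0hpf

# Let OVS act as a normal learning switch
ovs-ofctl add-flow ovsbr0 action=normal
```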
When I try flow_acl, I run the program on the DPU, but the traffic still flows end-to-end between the hosts. More details are in the PDF file I shared when I started this thread.
Let me know what other details I can provide. I can share a pcap if you want (not sure how useful it would be). Alternatively, you can share a script that deploys the configuration required for flow_acl on the DPU; I can run it on my setup and provide its output for your debugging.
I have also followed the quick start guide and tried various combinations of setting up OVS-DPDK as shown in the attachment, but I am still not getting the expected results:
In your drawing, you have p0 and pf0hpf on ovsbr1. In your configuration, it seems like they are in different bridges… can you clarify what’s going on here?
A few more questions:
If you don’t run the DOCA flow_acl program, what does your performance look like?
On the ARM cores, if you take a tcpdump on p0 and pf0hpf, do you see the packets?
Can you please share the script you are using to configure OVS?
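(For the tcpdump check above, something like the following on the ARM cores while traffic is running should be enough to see whether packets hit the kernel path:)

```shell
# Capture a handful of packets on the uplink and the representor
tcpdump -i p0 -c 20 -nn
tcpdump -i pf0hpf -c 20 -nn
```

If offload is working, little to no iperf3 traffic should appear here, since offloaded flows bypass the kernel.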
Throughout my post, I talk about two configurations.
Simple switch: This is the default configuration; the figure I shared today demonstrates it. It gives 45 Gbps with a single flow.
Flow_ACL: This program is one of the samples provided with the DOCA SDK. The configuration required to run it is inspired by this example: NVIDIA DOCA NAT Application Guide - NVIDIA Docs
(In fact, I have tried this NAT example, but the result is the same as with flow_acl.)
I have an interactive script that I created for all the configurations I have shared so far.
If you let me know what your OVS configuration looks like, I can try to match mine to it. If convenient, we can set up a video call and I can run all the configurations in front of you. Here is my email address: rvarde2@uic.edu