BlueField-2 DOCA Flow sample won't run on DPU?

Just getting started with BlueField-2 & DOCA, trying to run one of the sample apps just to “see it work” and get a baseline before really digging into things.

The sample, “flow_hairpin”, built fine but won’t run.

Perhaps someone can point me in the right direction?

Here are the details and error message:

The card has been set up as follows:

root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_hairpin# mlxconfig -d 0000:03:00.0 s PF_BAR2_ENABLE=0 PER_PF_NUM_SF=1 PF_TOTAL_SF=236

Device #1:

Device type: BlueField2
Name: MBF2H332A-AEEO_Ax_Bx
Description: BlueField-2 P-Series DPU 25GbE Dual-Port SFP56; PCIe Gen4 x8; Crypto Enabled; 16GB on-board DDR; 1GbE OOB management; HHHL
Device: 0000:03:00.0

Configurations:          Next Boot    New
  PF_BAR2_ENABLE         False(0)     False(0)
  PER_PF_NUM_SF          True(1)      True(1)
  PF_TOTAL_SF
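
For completeness, the values can be queried back after a cold reboot to confirm they took effect; a minimal check, assuming the same device address:

mlxconfig -d 0000:03:00.0 q PF_BAR2_ENABLE PER_PF_NUM_SF PF_TOTAL_SF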

The sample is being run with this command line (the “--” separates the DPDK EAL arguments from the sample’s own arguments; “-l 60” sets the DOCA log level to debug):

./build/doca_flow_hairpin -a auxiliary:mlx5_core.sf.2,dv_flow_en=2 -a auxiliary:mlx5_core.sf.3,dv_flow_en=2 -- -l 60

The error message that appears to be at the root of the issue is:

“mlx5_net: [mlx5dr_action_create_generic]: Cannot create HWS action since HWS is not supported”

And the full run output is:

root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_hairpin# ./build/doca_flow_hairpin -a auxiliary:mlx5_core.sf.2,dv_flow_en=2 -a auxiliary:mlx5_core.sf.3,dv_flow_en=2 -- -l 60
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: No legacy callbacks, legacy socket not created
[00:54:05:578715][DOCA][DBG][NUTILS:507]: Port 0 MAC: 02 7a db 5b 5e 7c
[00:54:05:651509][DOCA][DBG][NUTILS:507]: Port 1 MAC: 02 ab 52 2b 30 a9
[00:54:05:653878][DOCA][INF][engine_model:73]: engine model defined with mode=vnf
[00:54:05:653909][DOCA][INF][engine_model:75]: engine model defined with nr_pipe_queues=8
[00:54:05:653929][DOCA][INF][engine_model:76]: engine model defined with pipe_queue_depth=0
[00:54:05:654180][DOCA][INF][engine_field_mapping:96]: Engine field mapping initialized with 3 focus 12 protocols
[00:54:05:654215][DOCA][INF][engine_shared_resources:94]: Engine shared resources initialized successfully
[00:54:05:654250][DOCA][INF][dpdk_engine:437]: queue depth is zero, set it to default 128.
[00:54:05:654307][DOCA][INF][encap_table:119]: encap table created
[00:54:05:654417][DOCA][DBG][dpdk_table_hws:870]: Initialized dpdk table work module to be HW steering
[00:54:05:654443][DOCA][INF][dpdk_table:70]: Initializing dpdk table successfully
[00:54:05:654463][DOCA][DBG][dpdk_flow_hws:33]: Initialized dpdk flow work module to be HW steering
[00:54:05:654487][DOCA][INF][dpdk_flow:82]: Initializing dpdk flow successfully
[00:54:05:654513][DOCA][INF][engine_shared_resources:133]: Allocated 16 shared resources of type 2
[00:54:05:654533][DOCA][INF][dpdk_resource_manager:184]: Dpdk resource manager register completed
[00:54:05:654585][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.eth.dst_mac, offset=0)
[00:54:05:654614][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.eth.src_mac, offset=6)
[00:54:05:654634][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.eth.type, offset=12)
[00:54:05:654659][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.eth.dst_mac, offset=0)
[00:54:05:654680][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.eth.src_mac, offset=6)
[00:54:05:654700][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.eth.type, offset=12)
[00:54:05:654720][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.eth_vlan.tci, offset=0)
[00:54:05:654745][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.eth_vlan.tci, offset=0)
[00:54:05:654766][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.ipv4.src_ip, offset=12)
[00:54:05:654782][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.ipv4.dst_ip, offset=16)
[00:54:05:654804][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.ipv4.next_proto, offset=9)
[00:54:05:654824][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.ipv4.src_ip, offset=12)
[00:54:05:654849][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.ipv4.dst_ip, offset=16)
[00:54:05:654871][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.ipv4.next_proto, offset=9)
[00:54:05:654891][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.ipv6.src_ip, offset=8)
[00:54:05:654910][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.ipv6.dst_ip, offset=24)
[00:54:05:654935][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.ipv6.next_proto, offset=6)
[00:54:05:654955][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.ipv6.src_ip, offset=8)
[00:54:05:654975][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.ipv6.dst_ip, offset=24)
[00:54:05:654999][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.ipv6.next_proto, offset=6)
[00:54:05:655020][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.udp.src_port, offset=0)
[00:54:05:655040][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.udp.dst_port, offset=2)
[00:54:05:655059][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.udp.src_port, offset=0)
[00:54:05:655083][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.udp.dst_port, offset=2)
[00:54:05:655109][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.tcp.src_port, offset=0)
[00:54:05:655129][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.tcp.dst_port, offset=2)
[00:54:05:655154][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.outer.tcp.flags, offset=13)
[00:54:05:655173][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.tcp.src_port, offset=0)
[00:54:05:655193][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.tcp.dst_port, offset=2)
[00:54:05:655218][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.inner.tcp.flags, offset=13)
[00:54:05:655239][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.tunnel.vxlan.vni, offset=4)
[00:54:05:655263][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.tunnel.gre.key, offset=0)
[00:54:05:655288][DOCA][DBG][dpdk_layer:58]: Registered dpdk field opcode=match.packet.tunnel.gtp.teid, offset=4)
[00:54:05:655307][DOCA][INF][dpdk_layer:260]: Dpdk layer register completed
[00:54:05:655331][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.eth.dst_mac, offset=42, len=6)
[00:54:05:655356][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.eth.src_mac, offset=36, len=6)
[00:54:05:655381][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.eth.type, offset=48, len=2)
[00:54:05:655403][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.eth.dst_mac, offset=150, len=6)
[00:54:05:655423][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.eth.src_mac, offset=144, len=6)
[00:54:05:655448][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.eth.type, offset=156, len=2)
[00:54:05:655470][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.eth_vlan.tci, offset=50, len=2)
[00:54:05:655491][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.eth_vlan.tci, offset=158, len=2)
[00:54:05:655515][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.ipv4.src_ip, offset=56, len=4)
[00:54:05:655536][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.ipv4.dst_ip, offset=76, len=4)
[00:54:05:655555][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.ipv4.next_proto, offset=92, len=1)
[00:54:05:655576][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.ipv4.src_ip, offset=164, len=4)
[00:54:05:655600][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.ipv4.dst_ip, offset=184, len=4)
[00:54:05:655621][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.ipv4.next_proto, offset=200, len=1)
[00:54:05:655645][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.ipv6.src_ip, offset=56, len=16)
[00:54:05:655667][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.ipv6.dst_ip, offset=76, len=16)
[00:54:05:655689][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.ipv6.next_proto, offset=92, len=1)
[00:54:05:655713][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.ipv6.src_ip, offset=164, len=16)
[00:54:05:655734][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.ipv6.dst_ip, offset=184, len=16)
[00:54:05:655755][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.ipv6.next_proto, offset=200, len=1)
[00:54:05:655779][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.udp.src_port, offset=94, len=2)
[00:54:05:655800][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.udp.dst_port, offset=96, len=2)
[00:54:05:655820][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.udp.src_port, offset=202, len=2)
[00:54:05:655846][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.udp.dst_port, offset=204, len=2)
[00:54:05:655867][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.tcp.src_port, offset=94, len=2)
[00:54:05:655887][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.tcp.dst_port, offset=96, len=2)
[00:54:05:655911][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.outer.tcp.flags, offset=93, len=1)
[00:54:05:655934][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.tcp.src_port, offset=202, len=2)
[00:54:05:655954][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.tcp.dst_port, offset=204, len=2)
[00:54:05:655978][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.inner.tcp.flags, offset=201, len=1)
[00:54:05:655998][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.vxlan.vni, offset=104, len=3)
[00:54:05:656018][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.gre.key, offset=108, len=4)
[00:54:05:656038][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.gre.protocol, offset=106, len=2)
[00:54:05:656063][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.gtp.teid, offset=104, len=4)
[00:54:05:656083][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.nisp.hdr, offset=104, len=40)
[00:54:05:656108][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.audp.hdr, offset=104, len=24)
[00:54:05:656128][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.esp.spi, offset=104, len=4)
[00:54:05:656149][DOCA][DBG][doca_flow_layer:66]: Registered field opcode=match.packet.tunnel.esp.sn, offset=108, len=4)
[00:54:05:656213][DOCA][INF][doca_flow_layer:466]: Doca flow layer initialized
[00:54:05:656234][DOCA][INF][doca_flow:526]: Doca flow initialized successfully
[00:54:05:657054][DOCA][INF][utils_hash_table:123]: hash table a_tmplt_t port 0 created
[00:54:05:657130][DOCA][INF][utils_hash_table:123]: hash table p_tmplt_t port 0 created
[00:54:05:657208][DOCA][INF][utils_hash_table:123]: hash table dpdk_tbl_mgr port 0 created
[00:54:05:657335][DOCA][INF][utils_hash_table:123]: hash table grp_fwd port 0 created
[00:54:05:657358][DOCA][INF][dpdk_port:167]: Dpdk port 0 initialized successfully with 9 queues
mlx5_net: [mlx5dr_action_create_generic]: Cannot create HWS action since HWS is not supported
[00:54:05:714997][DOCA][ERR][dpdk_flow_hws_legacy:143]: failed to configure flow hws port 0 - rte flow configure, type 1 message: fail to configure port
[00:54:05:715066][DOCA][ERR][dpdk_engine:1694]: failed to start port 0 - init port, ret=-1
[00:54:05:715146][DOCA][INF][utils_hash_table:151]: hash table destroyed
[00:54:05:715581][DOCA][INF][utils_hash_table:151]: hash table destroyed
[00:54:05:715639][DOCA][INF][utils_hash_table:151]: hash table destroyed
[00:54:05:715673][DOCA][INF][utils_hash_table:151]: hash table destroyed
[00:54:05:715692][DOCA][INF][dpdk_port:230]: Dpdk port 0 destroyed successfully with 9 queues
[00:54:05:715723][DOCA][ERR][flow_common:82]: Failed to start port - dpdk port start failed (0)
[00:54:05:715747][DOCA][ERR][FLOW_HAIRPIN:139]: Failed to init DOCA ports
[00:54:05:715774][DOCA][INF][doca_flow_layer:478]: Doca flow layer destroyed
[00:54:05:715794][DOCA][INF][dpdk_resource_manager:191]: Dpdk resource manager unregister completed
[00:54:05:715814][DOCA][INF][dpdk_flow:205]: Cleanup dpdk flow
[00:54:05:715833][DOCA][DBG][dpdk_flow_hws:69]: Cleanup dpdk flow HW steering module
[00:54:05:715852][DOCA][INF][dpdk_table:77]: Cleanup dpdk table
[00:54:05:715871][DOCA][DBG][dpdk_table_hws:877]: Cleanup dpdk table HW steering module
[00:54:05:715890][DOCA][INF][dpdk_layer:272]: Dpdk layer unregister completed
[00:54:05:715917][DOCA][INF][dpdk_resource_manager:191]: Dpdk resource manager unregister completed
[00:54:05:715936][DOCA][INF][dpdk_flow:205]: Cleanup dpdk flow
[00:54:05:715954][DOCA][DBG][dpdk_flow_hws:69]: Cleanup dpdk flow HW steering module
[00:54:05:715972][DOCA][INF][dpdk_table:77]: Cleanup dpdk table
[00:54:05:715990][DOCA][DBG][dpdk_table_hws:877]: Cleanup dpdk table HW steering module
[00:54:05:716011][DOCA][INF][dpdk_layer:272]: Dpdk layer unregister completed
[00:54:05:716037][DOCA][INF][encap_table:136]: encap table destroyed
[00:54:05:716060][DOCA][INF][engine_shared_resources:243]: Cleanup 16 shared resources of type 2 completed
[00:54:05:716082][DOCA][INF][engine_field_mapping:104]: Engine field mapping destroyed
[00:54:05:716105][DOCA][INF][engine_model:150]: engine model destroyed
[00:54:05:716125][DOCA][INF][doca_flow:542]: Doca flow destroyed
[00:54:05:716144][DOCA][ERR][FLOW_HAIRPIN::MAIN:72]: flow_hairpin sample encountered errors
Tx port 0 is already stopped
[00:54:05:716219][DOCA][ERR][NUTILS:104]: Failed to bind hairpin queues (-16)
[00:54:05:716242][DOCA][ERR][NUTILS:191]: Disabling hairpin queues failed: err=21, port=0
Tx port 0 is already stopped
[00:54:05:716294][DOCA][ERR][NUTILS:117]: Failed to bind hairpin queues (-16)
[00:54:05:716316][DOCA][ERR][NUTILS:191]: Disabling hairpin queues failed: err=21, port=1
Device with port_id=0 already stopped
Segmentation fault (core dumped)
root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_hairpin#

Hopefully I am just missing something simple…!

Thanks!
-J

Hi,

I had the same issue; updating the firmware (via mlnxofedinstall) fixed it for me. Can you share your firmware version?

Hi Yavtuk -

Thanks for the reply. I believe our firmware is current. Here are the version details…

root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_hairpin# flint -d /dev/mst/mt41686_pciconf0 query
Image type: FS4
FW Version: 24.37.1300
FW Version(Running): 24.35.2000
FW Release Date: 11.5.2023
Product Version: 24.35.2000
Rom Info: type=UEFI Virtio net version=21.4.10 cpu=AMD64,AARCH64
type=UEFI Virtio blk version=22.4.10 cpu=AMD64,AARCH64
type=UEFI version=14.28.16 cpu=AMD64,AARCH64
type=PXE version=3.6.805 cpu=AMD64
Description: UID GuidsNumber
Base GUID: b8cef60300677860 14
Base MAC: b8cef6677860 14
Image VSD: N/A
Device VSD: N/A
PSID: MT_0000000540
Security Attributes: N/A
root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_hairpin#

-J

Hi @IamAries, I checked the sample with your options and it works in my environment.

Let me describe how I set up the environment step by step:

  1. /opt/mellanox/dpdk/bin/dpdk-hugepages.py --pagesize 2M --setup 4G
  2. ovs configuration (a rough sketch follows after this list):
    ovs.log (2.1 KB)
  3. app log
    flow_hairpin.log (41.2 KB)
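
Roughly, the ovs configuration in the attached log amounts to two bridges: one connecting the uplink to the SF representor the app receives on, and one connecting the other SF representor to the host. A sketch of that layout with my port names (p0 is the uplink, pf0hpf the host representor, en3f0pf0sf1/sf2 the SF representors; adjust the names to your system):

ovs-vsctl add-br ovsbr1
ovs-vsctl add-port ovsbr1 p0
ovs-vsctl add-port ovsbr1 en3f0pf0sf2
ovs-vsctl add-br ovsbr2
ovs-vsctl add-port ovsbr2 pf0hpf
ovs-vsctl add-port ovsbr2 en3f0pf0sf1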

Also, you should perform a power cycle, because the installed firmware is newer than the one currently running:

FW Version: 24.37.1300
FW Version(Running): 24.35.2000
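
The new image only becomes the running one after a full power cycle of the host (a warm reboot is usually not enough). On some devices mlxfwreset can activate it without pulling power, though I am not sure it is supported on BlueField:

mlxfwreset -d /dev/mst/mt41686_pciconf0 reset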

Hey Yavtuk -

Thank you for taking the time to run the sample and send your logs. It turns out the issue was that I did not specify the proper sub-function in the command-line arguments. Your log showed it right away. I now have things working properly.
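
For anyone else who hits this: the auxiliary device names that the -a arguments must match can be listed directly, which would have shown me the mismatch right away:

ls /sys/bus/auxiliary/devices/ | grep mlx5_core.sf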

Thanks again
-J

Hey Yavtuk -

Well, it seems I spoke too soon. If you don’t mind, I could use a little more assistance. I can now get the samples to run without errors, but I just cannot see them actually work!

The test scenario I have is running a ping on the network segment connected to port 1. I run either the flow_hairpin or flow_drop sample; both run with no errors but do not seem to have any effect.

Running tcpdump on the host machine on my ens2f1 port I can see the ping traffic coming in. Looking at the ovs switch flows I can see the packet counters increasing. It would seem the DPU is passing the traffic through ovs and up to the host – ok. I then run the flow_drop sample application. I would expect, at least, the traffic going up to the host to stop as presumably the flow_drop app should be passing all traffic now to the second port on the DPU (and not up to the host). This does not happen.

Looking at the code for this sample, it appears that its intent is to pass all traffic between the ports except for the traffic that it attempts to block (apparently traffic destined for 8.8.8.8 on port 80). Not seeing traffic emanating from my port 0, while at the same time seeing the traffic continue to appear at the host, would seem to suggest something isn’t working.

I also note that my starting condition does not appear to match yours. Your log output seems to show no SFs existing before you added the two for use by the sample (your output of “/opt/mellanox/iproute2/sbin/mlxdevm port show” was blank). On my system there are already two existing SFs that I did not create. I am not sure what accounts for this difference. I just did a reinstall of the SDK and a re-flash of the card to attempt to clear out anything that may have been pre-existing, but the result is still the same.

Here is what my system looks like:

/opt/mellanox/iproute2/sbin/mlxdevm port show
pci/0000:03:00.0/229376: type eth netdev en3f0pf0sf0 flavour pcisf controller 0 pfnum 0 sfnum 0
function:
hw_addr 02:f4:65:74:e3:69 state active opstate attached roce true max_uc_macs 128 trust off
pci/0000:03:00.0/229377: type eth netdev en3f0pf0sf4 flavour pcisf controller 0 pfnum 0 sfnum 4
function:
hw_addr 00:00:00:00:10:00 state active opstate attached roce true max_uc_macs 128 trust on
pci/0000:03:00.1/294912: type eth netdev en3f1pf1sf0 flavour pcisf controller 0 pfnum 1 sfnum 0
function:
hw_addr 02:f5:45:34:a0:38 state active opstate attached roce true max_uc_macs 128 trust off
pci/0000:03:00.1/294913: type eth netdev en3f1pf1sf5 flavour pcisf controller 0 pfnum 1 sfnum 5
function:
hw_addr 00:00:00:00:20:00 state active opstate attached roce true max_uc_macs 128 trust on

(sf4 and sf5 are the ones I added; the two sf0s already existed.)
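
For reference, sf4 was created following the usual DOCA SF flow; roughly these steps (the port index 229377 and hw_addr come from the listing above, but treat the exact sequence as a sketch rather than a verified recipe):

/opt/mellanox/iproute2/sbin/mlxdevm port add pci/0000:03:00.0 flavour pcisf pfnum 0 sfnum 4
/opt/mellanox/iproute2/sbin/mlxdevm port function set pci/0000:03:00.0/229377 hw_addr 00:00:00:00:10:00 trust on state active
echo mlx5_core.sf.4 > /sys/bus/auxiliary/drivers/mlx5_core.sf_cfg/unbind
echo mlx5_core.sf.4 > /sys/bus/auxiliary/drivers/mlx5_core.sf/bind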

And my command line to run the flow_drop example:

./build/doca_flow_drop -a auxiliary:mlx5_core.sf.5,dv_flow_en=2 -a auxiliary:mlx5_core.sf.4,dv_flow_en=2 -- -l 60

And to run the hairpin:

./build/doca_flow_hairpin -a auxiliary:mlx5_core.sf.5,dv_flow_en=2 -a auxiliary:mlx5_core.sf.4,dv_flow_en=2 -- -l 60

Do you have any thoughts that could help me here?

Thanks again for your time
-J
config.log (2.7 KB)
flow_drop.log (25.9 KB)

Hi @IamAries, I don’t have much experience with Open vSwitch.
It’s hard to say, but I can try to find time to check it locally.

Can you share the rules for the OVS bridges?

sudo ovs-ofctl dump-flows ovsbr1
sudo ovs-ofctl dump-flows ovsbr2

BTW, you can delete the SFs manually:
/opt/mellanox/iproute2/sbin/mlxdevm port del pci/0000:03:00.0/xxxxxx

Based on the config log, the OVS configuration seems wrong:

b8a82ad1-cabf-4a6f-829b-2b1c05c5e06b
    Port en3f1pf1sf0
        Interface en3f1pf1sf0
    Port en3f0pf0sf0
        Interface en3f0pf0sf0

These ports aren’t trusted in your setup:

pci/0000:03:00.0/229376: type eth netdev en3f0pf0sf0 flavour pcisf controller 0 pfnum 0 sfnum 0
function:
hw_addr 02:f4:65:74:e3:69 state active opstate attached roce true max_uc_macs 128 trust off

Here are the OVS flows:

root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_drop# ovs-ofctl dump-flows ovsbr1
cookie=0x0, duration=22826.342s, table=0, n_packets=380, n_bytes=118022, priority=0 actions=NORMAL
root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_drop# ovs-ofctl dump-flows ovsbr2
cookie=0x0, duration=22829.571s, table=0, n_packets=15970, n_bytes=1427430, priority=0 actions=NORMAL
root@localhost:/opt/mellanox/doca/samples/doca_flow/flow_drop#

My assumption had been, though, that if traffic was being hairpinned in the DPU it would never come up to OVS, because it is offloaded. Is that not so?

The untrusted SFs are the first two, which were not created by me. I did not specify either of them in my CLI args, so presumably the sample is not attempting to use them.

I will try to remove them shortly and see if it makes any difference.

If you do find time to try it on yours I would most appreciate it. Again, thank you for taking the time to help me get started here; I really appreciate it.

-J

I added some rules to Open vSwitch for ingress traffic:

ingress packet → p0 → ovsbr1 → en3f0pf0sf2 → app → en3f0pf0sf1 → ovsbr2 → pf0hpf → host
sudo ovs-ofctl add-flow ovsbr1 in_port=p0,actions=output:en3f0pf0sf2
sudo ovs-ofctl add-flow ovsbr2 in_port=en3f0pf0sf1,actions=output:pf0hpf

I can see the packets inside my app. You can try to add the same rules for p1.
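
Something like the mirror-image rules should cover the return path (host → wire); untested on my side:

sudo ovs-ofctl add-flow ovsbr2 in_port=pf0hpf,actions=output:en3f0pf0sf1
sudo ovs-ofctl add-flow ovsbr1 in_port=en3f0pf0sf2,actions=output:p0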

This appears to be where my issue was.

Setting up rules similar to yours in OVS now seems to allow the examples to pass traffic in the way I would expect.

What is curious is that OVS does not show the flows as hardware-offloaded. Should it? Or is it implied that they are offloaded because we reference the representors?

Here is the output of dump-flows on my OVS. Traffic is passing successfully, but how can I verify that this traffic is actually staying in the chip and not coming up to the ARM cores?

(This is a run of the flow_drop example, so all traffic should either drop in hardware or pass to the other physical port in hardware; nothing should go up to the ARM cores.)

root@localhost:/home/ubuntu# ovs-appctl dpctl/dump-flows
recirc_id(0),in_port(5),eth(src=e8:eb:d3:8c:08:6a,dst=00:00:00:00:fe:00),eth_type(0x0800),ipv4(frag=no), packets:1518, bytes:127512, used:0.010s, actions:6
recirc_id(0),in_port(4),eth(src=02:f5:45:34:a0:38,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0800),ipv4(frag=no), packets:0, bytes:0, used:9.440s, actions:2,3
recirc_id(0),in_port(4),eth(src=e8:eb:d3:8c:08:6a,dst=00:00:00:00:fe:00),eth_type(0x0800),ipv4(frag=no), packets:28, bytes:2352, used:0.010s, actions:2,3
root@localhost:/home/ubuntu#
root@localhost:/home/ubuntu# ovs-ofctl dump-flows br0
cookie=0x0, duration=164.480s, table=0, n_packets=1576, n_bytes=132398, in_port=p0 actions=output:en3f1pf1sf4
cookie=0x0, duration=1704.893s, table=0, n_packets=4251, n_bytes=356274, priority=0 actions=NORMAL
root@localhost:/home/ubuntu# ovs-ofctl dump-flows br1
cookie=0x0, duration=1460.190s, table=0, n_packets=9106, n_bytes=928786, in_port=p1 actions=output:en3f1pf1sf0
cookie=0x0, duration=1705.415s, table=0, n_packets=1423, n_bytes=126236, priority=0 actions=NORMAL
root@localhost:/home/ubuntu#

-J

“What is curious is that OVS does not show the flows as hardware-offloaded. Should it? Or is it implied that they are offloaded because we reference the representors?

Here is the output of dump-flows on my OVS. Traffic is passing successfully, but how can I verify that this traffic is actually staying in the chip and not coming up to the ARM cores?”

I am thinking about the same issue and I am going to investigate it. Let me know if you find anything.
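
One thing worth checking in the meantime (I have not verified this on BF-2 yet): whether OVS has hardware offload enabled at all, and whether the datapath reports any flows as offloaded:

ovs-vsctl get Open_vSwitch . other_config:hw-offload
ovs-appctl dpctl/dump-flows type=offloaded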

Okay, yes, I will! Thanks for all of your help; it sure sped up getting things going!

-J

You are very welcome 🙂
