Originally published at: https://developer.nvidia.com/blog/developing-applications-with-nvidia-bluefield-dpu-and-dpdk/
Developing an application and offloading it in two ways, via DPDK and via DOCA libraries, to run on the BlueField Data Processing Unit
Hi,
Is it possible to program the DPU using rte_flow from a DPDK application running on an x86 VM? I can offload flows with action type RSS (steering to another queue), but I am not able to program transfer rules such as: if a packet ingresses on VF port X, match the five-tuple and egress it to VF port Y (using RTE_FLOW_ACTION_TYPE_PORT_ID). I have tried DPDK 19.11 and 20.11 with OFED 5.4 from my VM, and my BF2 is in embedded switch mode (I also tried separated mode on a ConnectX-6). Rule creation returns failure with no error message and an unspecified error code. Is it always required to use representor ports on the DPU, with the DPDK program running on the Arm, to offload such flow rules? Or am I missing some configuration here?
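For reference, this is roughly the kind of transfer rule I am trying to create from the VM. A minimal sketch against the DPDK 20.11 API; the function name, 5-tuple values, and port IDs are placeholders of my own, not from a working setup:

```c
#include <rte_flow.h>

/* Sketch: match a fixed IPv4/UDP 5-tuple on one port and forward to another
 * port via RTE_FLOW_ACTION_TYPE_PORT_ID, with the transfer attribute so the
 * rule goes to the embedded switch. Values are placeholders. */
static struct rte_flow *
offload_redirect(uint16_t src_port_id, uint16_t dst_port_id)
{
	struct rte_flow_error error;

	struct rte_flow_attr attr = {
		.ingress = 1,
		.transfer = 1,	/* target the embedded switch */
	};

	/* Example 5-tuple: IPv4 addresses and UDP ports are arbitrary. */
	struct rte_flow_item_ipv4 ipv4_spec = {
		.hdr = {
			.src_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)),
			.dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 2)),
		},
	};
	struct rte_flow_item_ipv4 ipv4_mask = {
		.hdr = {
			.src_addr = RTE_BE32(0xffffffff),
			.dst_addr = RTE_BE32(0xffffffff),
		},
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr = { .src_port = RTE_BE16(5000), .dst_port = RTE_BE16(6000) },
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr = { .src_port = RTE_BE16(0xffff), .dst_port = RTE_BE16(0xffff) },
	};

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ipv4_spec, .mask = &ipv4_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	/* Redirect matched packets to the other VF port. */
	struct rte_flow_action_port_id to_port = { .id = dst_port_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &to_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* This validate/create step is where I get the unspecified error. */
	if (rte_flow_validate(src_port_id, &attr, pattern, actions, &error) != 0)
		return NULL;

	return rte_flow_create(src_port_id, &attr, pattern, actions, &error);
}
```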
Thanks
Subhajit
By default, the DPU's Arm cores control the hardware accelerators (this is the embedded mode you are referring to), and typically the control plane is offloaded to the Arm.
However, you can still run your control plane on the x86 host and access the hardware accelerators in one of two ways:
- Using the DOCA Flow gRPC library - gRPC Infrastructure :: NVIDIA DOCA SDK Documentation
- [OR] By using the NIC mode - Modes of Operation :: NVIDIA DOCA SDK Documentation
The NIC mode is similar to the separated mode (in that the host can configure the embedded switch), so I am not sure what is missing. I would suggest installing the DOCA 1.3 metapackage on your host and DPU and trying with those DPDK libraries. And maybe also try the NIC mode (instead of the separated mode).
Thanks. I have not tried NIC mode on the BF2. However, I also checked with a ConnectX-6 Dx, where there is no internal CPU as such. On both cards I tried the following:
- Ports in legacy mode (without representor ports): I ran testpmd inside the VM and programmed a steering rule that uses port_id in the action. Result: Caught PMD error type 1 (cause unspecified): (no stated reason): Operation not supported
- Then I configured switchdev mode on the host to get representor ports, attached the VFs to the VM, and the representor ports were present on the host. I then tried to run testpmd on the host using -w "<pf's pci address>,representor=[0-1]". However, testpmd was not able to probe that mlx5 device (the arguments were parsed correctly, and it was able to probe other VFs individually, but not the PF and representor ports); see the sketch below.
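For reference, the equivalent probe outside of testpmd would look roughly like this. A sketch assuming DPDK 20.11 (where -w is still the device whitelist option); the PCI address is just a placeholder for the PF's address:

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(void)
{
	/* Equivalent of: testpmd -w <pf pci address>,representor=[0-1] ... */
	char *eal_args[] = {
		"probe-demo",
		"-w", "0000:03:00.0,representor=[0-1]",	/* placeholder PCI address */
	};

	if (rte_eal_init(3, eal_args) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}

	/* List whatever ethdev ports were actually probed: with the devargs
	 * above this should be the PF plus one port per representor. */
	uint16_t port_id;
	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_dev_info info;
		if (rte_eth_dev_info_get(port_id, &info) == 0)
			printf("port %u: driver %s\n", port_id, info.driver_name);
	}

	return rte_eal_cleanup();
}
```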
I will try NIC mode and gRPC on the BF2; I am not sure how to offload on the ConnectX-6 Dx.
Another conceptual doubt regarding steering rules:
If I run the control plane on the Arm, it offloads flows similar to what OVS does, and the direction (ingress, egress) is reversed with respect to the VM application because the control app uses the representor ports. Earlier my VM had two VFs: when it received traffic on VF1, it inspected and matched the five-tuple and wrote the packet back out on VF2. Now I want to offload this rule via the Arm control plane. The flow rules are programmed via the representor ports from the Arm, so I think I should put an egress rule on port 0 with the transfer attribute, a five-tuple item match, and an action to port id 1 (ports 0 and 1 are the DPDK port IDs of the representors pf0vf1 and pf0vf2). My doubt is about the action to port id 1: will that redirect traffic back to the VM again? What action should be used here? Should passthru or jump be used instead? Because what the VM was doing earlier is writing packets to VF2, and VF2 internally sends the packets out via the PF, without any loop.
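To make the doubt concrete, this is roughly the rule I have in mind, inserted from the Arm. A sketch only, against the DPDK 20.11 API; it assumes port 0 is the pf0vf1 representor and port 1 is the pf0vf2 representor, the function name is mine, and whether PORT_ID here delivers packets to the VF behind the representor (i.e. back into the VM) is exactly what I am asking:

```c
#include <rte_flow.h>

/* Sketch of the rule proposed above: created on DPDK port 0 (assumed to be
 * the pf0vf1 representor) with the transfer attribute, forwarding matched
 * traffic to DPDK port 1 (assumed to be the pf0vf2 representor). */
static struct rte_flow *
offload_from_arm(void)
{
	struct rte_flow_error error;

	struct rte_flow_attr attr = {
		/* The proposal above is an egress rule with transfer; whether
		 * ingress or egress is right once the rule is viewed from the
		 * representor side is part of the doubt. */
		.egress = 1,
		.transfer = 1,
	};

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		/* ... same IPv4/UDP 5-tuple items as in the earlier sketch ... */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	/* Forward to DPDK port 1, the pf0vf2 representor: does this send the
	 * packet back into the VM via VF2, or out through the PF? */
	struct rte_flow_action_port_id to_port = { .id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &to_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(0 /* pf0vf1 representor */, &attr,
			       pattern, actions, &error);
}
```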