Hi, I’m configuring a UPF application with DPDK.
I want to filter GTP-U packets by inner source IP (in my case the packets are sent from the TRex packet generator, with inner source addresses ranging from 239.0.238.1 to 239.0.238.4) on a BlueField-3 NIC, using hardware offload. I set FLEX_PARSER_PROFILE_ENABLE to 3 and confirmed the value again after a reboot. Then I created flow rules with the rte_flow API that direct packets to the queues bound to the worker cores according to the last octet of the inner source IP: if a packet’s last octet is 1, forward it to queue 0; if it is 2, forward it to queue 1; and so on (a sketch of the rule I have in mind is shown after the log below). I tried several patterns: ETH | IPV4 | UDP | GTPU | IPV4 | UDP | END, ETH | IPV4 | UDP, and ETH | IPV4 | UDP | GTPU. With ETH | IPV4 | UDP, flow creation reports success, but the rule has no effect during the packet transmission test: all traffic lands on a single, seemingly random queue. For the other patterns,
=== GTP-U TEID Flow Rule Debug Info ===
Port: 1, TEID: 0x080004f1, Queue: 0
Expected GTP-U TEID pattern: 0xf1040008 (mask: 0xffffffff)
Flow validation failed: item not supported (type=13)
=== GTP-U TEID Flow Rule Debug Info ===
Port: 1, TEID: 0x080004f2, Queue: 1
Expected GTP-U TEID pattern: 0xf2040008 (mask: 0xffffffff)
Flow validation failed: item not supported (type=13)
=== GTP-U TEID Flow Rule Debug Info ===
Port: 1, TEID: 0x080004f3, Queue: 2
Expected GTP-U TEID pattern: 0xf3040008 (mask: 0xffffffff)
Flow validation failed: item not supported (type=13)
=== GTP-U TEID Flow Rule Debug Info ===
Port: 1, TEID: 0x080004f4, Queue: 3
Expected GTP-U TEID pattern: 0xf4040008 (mask: 0xffffffff)
Flow validation failed: item not supported (type=13)
NIC Initialization success.
these messages come out and flow creation fails.
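For reference, here is a minimal sketch of the kind of rule I am trying to express. The helper name create_inner_src_rule, the queue mapping, and the 239.0.238.x base address are just from my own setup and are illustrative only; the item and field names follow the rte_flow layout as I understand it in DPDK 22.11.

/*
 * Illustrative sketch: match on the inner source IPv4 address of a GTP-U
 * packet and forward it to a given Rx queue.
 * Pattern: ETH / IPV4 / UDP / GTPU / IPV4 (inner) -> QUEUE
 * The inner source address 239.0.238.N is matched with a full /32 mask.
 */
#include <rte_flow.h>
#include <rte_byteorder.h>
#include <rte_ip.h>

static struct rte_flow *
create_inner_src_rule(uint16_t port_id, uint8_t last_octet,
                      uint16_t queue_index, struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Inner IPv4 spec/mask: match the full source address 239.0.238.N. */
    struct rte_flow_item_ipv4 inner_spec = {
        .hdr.src_addr = rte_cpu_to_be_32(RTE_IPV4(239, 0, 238, last_octet)),
    };
    struct rte_flow_item_ipv4 inner_mask = {
        .hdr.src_addr = rte_cpu_to_be_32(0xffffffff),
    };

    /* Outer ETH/IPV4/UDP and GTPU are matched by type only (no spec). */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_GTPU },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &inner_spec, .mask = &inner_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_action_queue queue = { .index = queue_index };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_validate(port_id, &attr, pattern, actions, error) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, error);
}

I call this once per last octet (1 to 4), with queue_index = last_octet - 1, and validation fails in the same way as shown in the log above whenever the GTPU item is present.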
I tried this on both DPDK 22.11 and DPDK 25.03, and here is my environment:
root@localhost:/opt/mndu# mlxconfig -d 0000:03:00.1 query FLEX_PARSER_PROFILE_ENABLE
Device #1:
Device type: BlueField3
Name: 900-9D3B6-00CV-A_Ax
Description: NVIDIA BlueField-3 B3220 P-Series FHHL DPU; 200GbE (default mode) / NDR200 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Enabled
Device: 0000:03:00.1
Configurations: Next Boot
FLEX_PARSER_PROFILE_ENABLE 3
root@localhost:/opt/mndu# lspci | grep Ethernet
03:00.0 Ethernet controller: Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller (rev 01)
03:00.1 Ethernet controller: Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller (rev 01)
This is my device.
root@localhost:/opt/mndu# modinfo mlx5_core | grep ^version:
version: 23.07-0.5.0
root@localhost:/opt/mndu# dpkg -l | grep rdma-core
ii python3-pyverbs:arm64 2307mlnx47-1.2307050 arm64 Python bindings for rdma-core
ii rdma-core 2307mlnx47-1.2307050 arm64 RDMA core userspace infrastructure and documentation
These are the kernel module and rdma-core versions.
root@localhost:/opt/mndu# mlxfwmanager
Querying Mellanox devices firmware …
Device #1:
Device Type: BlueField3
Part Number: 900-9D3B6-00CV-A_Ax
Description: NVIDIA BlueField-3 B3220 P-Series FHHL DPU; 200GbE (default mode) / NDR200 IB; Dual-port QSFP112; PCIe Gen5.0 x16 with x16 PCIe extension option; 16 Arm cores; 32GB on-board DDR; integrated BMC; Crypto Enabled
PSID: MT_0000000884
PCI Device Name: /dev/mst/mt41692_pciconf0
Base MAC: a088c20b5a16
Versions: Current Available
FW 32.39.4082 N/A
PXE 3.7.0300 N/A
UEFI 14.33.0012 N/A
UEFI Virtio blk 22.4.0014 N/A
UEFI Virtio net 21.4.0013 N/A
Status: No matching image found
Finally, this is the current firmware version.
- My first question: should I upgrade the kernel module and rdma-core to newer versions in order to check whether tunnel_stateless_gtp is enabled? Is that necessary for my goal of filtering packets by inner source IP?
- Do I need to upgrade the BF3 firmware to enable GTP packet filtering and inner source IP filtering?
- Are there any additional conditions or requirements for this to work? If I am missing anything, could you let me know?
Regards,
Sunghyun Jang
SNETICT R&D employee in South Korea