TLDR:
- (/) full 100 Gbps traffic-gen → DuT throughput when RSS is disabled
- (X) significant packet drops with RSS enabled for TCP traffic
Test setup:
- Intel Icelake based test setup
- NICs Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56
- Driver name: mlx5_pci. Firmware-version: 22.31.1014
- DuT and TrafficGen each use 1x:
- Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56
- all BIOS settings according to https://fast.dpdk.org/doc/perf/DPDK_21_08_Mellanox_NIC_performance_report.pdf
- all OS settings according to the aforementioned DPDK doc https://fast.dpdk.org/doc/perf/DPDK_21_08_Mellanox_NIC_performance_report.pdf
- Trex v2.92 on Traffic Generator
- dpdk 21.08
- MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64
Issue:
- running testpmd without RSS support reaches full 100 Gbps throughput: OK (/)
  - ./dpdk-testpmd -n 4 -l 4,6,8,10,12 -a 0000:4b:00.0 -a 0000:4b:00.1 -- --forward-mode=mac --rxq=1 --txq=1 -a
- enabling (symmetric) RSS reaches full 100 Gbps throughput for UDP traffic profiles: OK (/)
  - ./dpdk-testpmd -n 4 -l 4,6,8,10,12 -a 0000:4b:00.0 -a 0000:4b:00.1 -- --forward-mode=mac --rxq=4 --txq=4 --nb-cores=4 -i --rss-udp -a
- using RSS-IP support drops performance by > 50 %: FAIL (X)
  - generating UDP traffic reaches 100 Gbps
  - diverse TCP traffic (e.g. /opt/trex/v2.92# ./t-rex-64 -f ./cap2/http_simple.yaml -c 8 -m 130000) causes rx-drops
  - ./dpdk-testpmd -n 4 -l 4,6,8,10,12 -a 0000:4b:00.0 -a 0000:4b:00.1 -- --forward-mode=mac --rxq=4 --txq=4 --nb-cores=4 --numa -i --rss-ip -a
  - diagnosis via "show port xstats all" shows that the missing portion of the TCP traffic is discarded by port 1, e.g. rx_phy_discard_packets increasing to 454499854 (see the sketch after this list)
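For scripted diagnosis outside the testpmd prompt, the same counter can be polled through the xstats API. A minimal sketch, assuming an initialized EAL and a started port; the helper name check_phy_discards is mine, and rx_phy_discard_packets is the mlx5 xstat name seen in the output above:
{code}
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: print a port's rx_phy_discard_packets counter, the same xstat
 * that "show port xstats all" reports inside testpmd. */
static void
check_phy_discards(uint16_t port_id)
{
	uint64_t id, value;

	/* resolve the counter name to an id once, then fetch only that id */
	if (rte_eth_xstats_get_id_by_name(port_id,
			"rx_phy_discard_packets", &id) != 0)
		return;
	if (rte_eth_xstats_get_by_id(port_id, &id, &value, 1) != 1)
		return;
	printf("port %" PRIu16 " rx_phy_discard_packets: %" PRIu64 "\n",
	       port_id, value);
}
{code}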
Questions:
- How to resolve the rx-drop issue for TCP-based traffic whilst using RSS?
  - (the issue is seen with and without the patchset below)
- What is the official way to enable symmetric RSS based on the incoming 5-tuple? (see the ethdev sketch after this list and the rte_flow sketch after the patchset)
  - currently the patchset below is applied to the dpdk codebase
  - expected outcome: flow-stable, symmetric load balancing of 5-tuple flows to the same rx-queue; the total number of RX queues shall be in the range 4 to 16
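For context, what the patchset below effectively does can also be expressed directly against the ethdev API. A minimal sketch, assuming an already-probed port and caller-chosen queue counts; the function name configure_symmetric_rss is mine. The key of repeating 0x6D, 0x5A makes the Toeplitz hash symmetric, so both directions of a 5-tuple flow hash to the same RX queue:
{code}
#include <rte_ethdev.h>

/* 40-byte key of repeating 0x6D 0x5A: yields a symmetric Toeplitz hash,
 * so src/dst-swapped packets of one flow land on the same RX queue. */
static uint8_t symmetric_rss_key[40] = {
	0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
	0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
	0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
	0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
	0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
};

static int
configure_symmetric_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = symmetric_rss_key,
			.rss_key_len = sizeof(symmetric_rss_key),
			.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
		},
	};
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	/* keep only the hash types this PMD actually supports */
	conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
{code}
With the patched testpmd running, "show port 0 rss-hash key" should report the repeating 6D5A... key, which is a quick way to confirm the key was actually taken.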
Related issues:
- this issue might be related to a similar report showing the same RX-drop upon reaching 54 Mpps, ONLY when using TCP traffic
DPDK patchset:
{code}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index f9185065af..7f6d29c16e 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1095,7 +1095,7 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
 				rss_hf = ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 55eb293cc0..846cb35a5f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -86,6 +86,20 @@
 #define EXTMEM_HEAP_NAME "extmem"
 #define EXTBUF_ZONE_SIZE RTE_PGSIZE_2M
+
+uint8_t symmetric_rss_key[40] = {
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A
+};
+
 uint16_t verbose_level = 0; /**< Silent by default. */
 int testpmd_logtype; /**< Log type for testpmd logs */
@@ -3757,7 +3771,8 @@ init_port_config(void)
 		return;
 	if (nb_rxq > 1) {
-		port->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+		port->dev_conf.rx_adv_conf.rss_conf.rss_key = symmetric_rss_key;
+		port->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 40;
 		port->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 			rss_hf & port->dev_info.flow_type_rss_offloads;
 	} else {
{code}
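As a possible alternative to patching the default key: rte_flow exposes a symmetric Toeplitz hash function directly, and the mlx5 PMD advertises support for it. A hedged sketch, untested on this setup; the helper name create_symmetric_rss_flow and the queue list 0..3 are mine:
{code}
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: hash all IPv4/TCP ingress traffic with the symmetric Toeplitz
 * function over RX queues 0..3, so both flow directions share a queue. */
static struct rte_flow *
create_symmetric_rss_flow(uint16_t port_id, struct rte_flow_error *err)
{
	static const uint16_t queues[] = { 0, 1, 2, 3 };
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
		.types = ETH_RSS_IP | ETH_RSS_TCP,
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}
{code}
The equivalent rule can also be tried from the testpmd prompt with its flow syntax, roughly: flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss func symmetric_toeplitz types ipv4 tcp end queues 0 1 2 3 end / end (exact token spelling per the testpmd flow documentation).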