ConnectX-6 DX - packet drop when enabling RSS / rxqueues

TL;DR:

  • (/) full 100 Gbps throughput traffic-gen → DUT with RSS disabled
  • (X) significant packet drop with RSS enabled for TCP traffic

Test setup:

Issue:

  • running testpmd without RSS support reaches full 100 Gbps throughput

  • OK (/)

    • ./dpdk-testpmd -n 4 -l 4,6,8,10,12 -a 0000:4b:00.0 -a 0000:4b:00.1 -- --forward-mode=mac --rxq=1 --txq=1 -a
  • enabling (symmetric) RSS reaches full 100 Gbps throughput for UDP traffic profiles

  • OK (/)

    • ./dpdk-testpmd -n 4 -l 4,6,8,10,12 -a 0000:4b:00.0 -a 0000:4b:00.1 -- --forward-mode=mac --rxq=4 --txq=4 --nb-cores=4 -i --rss-udp -a
  • using RSS-IP support drops performance by more than 50 % (X)

    • generating UDP traffic reaches 100 Gbps
    • diverse TCP traffic (e.g. /opt/trex/v2.92# ./t-rex-64 -f ./cap2/http_simple.yaml -c 8 -m 130000) causes rx-drops
    • ./dpdk-testpmd -n 4 -l 4,6,8,10,12 -a 0000:4b:00.0 -a 0000:4b:00.1 -- --forward-mode=mac --rxq=4 --txq=4 --nb-cores=4 --numa -i --rss-ip -a
    • diagnosis via "show port xstats all" indicates that the missing portion of TCP traffic is discarded by port 1, e.g. a steadily increasing rx_phy_discard_packets: 454499854

Questions:

  • How can the rx-drop issue for TCP-based traffic be resolved while using RSS?
    • (the issue is seen with and without the patchset below)
  • What is the official way to enable symmetric RSS based on the incoming 5-tuple?
    • currently the following patchset is applied to the DPDK codebase
    • expected outcome: flow-stable, symmetric load balancing of 5-tuple flows to the same rx-queue. The total number of RX queues shall be in the range 4 to 16

RELATED Issues:

The issue might be related to the one below, which shows the same RX drop upon reaching 54 Mpps ONLY when using TCP traffic.

https://mymellanox.force.com/mellanoxcommunity/s/question/0D51T00008yTVgjSAG/how-to-improve-the-performance-of-flow-steering-on-connectx5

DPDK patchset:

{code}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index f9185065af..7f6d29c16e 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1095,7 +1095,7 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = RTE_ETH_RSS_IP;
+				rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
 				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 55eb293cc0..846cb35a5f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -86,6 +86,20 @@
 #define EXTMEM_HEAP_NAME "extmem"
 #define EXTBUF_ZONE_SIZE RTE_PGSIZE_2M
+
+uint8_t symmetric_rss_key[40] = {
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A,
+	0x6D, 0x5A, 0x6D, 0x5A
+};
+
 uint16_t verbose_level = 0; /**< Silent by default. */
 int testpmd_logtype; /**< Log type for testpmd logs */
@@ -3757,7 +3771,8 @@ init_port_config(void)
 		return;
 	if (nb_rxq > 1) {
-		port->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+		port->dev_conf.rx_adv_conf.rss_conf.rss_key = symmetric_rss_key;
+		port->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 40;
 		port->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 			rss_hf & port->dev_info.flow_type_rss_offloads;
 	} else {
{code}

Hi Tobias,

As you know,

'--rss-udp' is NOT only for UDP; it includes ipv4/ipv6 and UDP.

And '--rss-ip' is only for ipv4/ipv6.

About the rx-drop issue, maybe first check whether RSS works or not.

Use 'show port xstats' to check whether all queues receive packets.

If not, check the attributes of the received packets.

If the 5-tuples of all packets are the same, RSS won't work.

Regards,

Levei

Hi @Levei Luo,

please check my initial post again.

In my case --rss-ip has been customized and sets

rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP;

instead of the initial

rss_hf = RTE_ETH_RSS_IP;

Just imagine we added a new CLI flag rss-tcp-udp-sctp!

Also, with regards to your ideas:

  • "If the 5 tuples of all packets are the same, the RSS won't work."
    • all flows differ, with distinct 5-tuples!
  • 'show port xstats'
    • I can clearly see RSS working in the case of rss-udp with UDP-only traffic
    • in the case of the customized rss-ip (aka rss-tcp-udp-sctp):
      • xstats show a high amount of rx_phy_discard_packets, covering all TCP packets

Please involve 3rd-level support here and provide a working configuration with symmetric RSS enabled.

Any other card vendor can easily run symmetric RSS at 100 Gbps and beyond.

What am I missing here? Thanks

Hi @Levei Luo, any updates? Please check my previous post.

Hey Tobias R (Partner)

I have a similar problem.

https://community.mellanox.com/s/question/0D51T00009Ho5YQSAZ/connectx6-dpdk-dpdktestpmd-receive-error-len-error-checksum-udp-packet-performance-is-very-low