How to improve the performance of Flow Steering on ConnectX-5?

In one of our tests, we realized that using flow steering was significantly reducing the performance of our application. We confirmed this with a simple benchmarking tool that just counts packets. We are testing with an MCX516A-CCA_Ax NIC.

In our test, we are sending 100Mpps to our test machine. Without any flow rules, we are able to receive 100Mpps. But when we set up flow steering and flow isolation, we only get about 54Mpps. In this case, we simply set up a single flow rule matching all UDP traffic with an RSS action spreading over 16 queues.
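For reference, a minimal sketch of the rule described above, using the DPDK 19.11 rte_flow API (port number, queue count, and error handling are simplified; this is an illustration, not our exact application code):

```c
#include <rte_flow.h>
#include <rte_ethdev.h>

/* Sketch: steer all UDP-over-IPv4 traffic to 16 queues via RSS,
 * with the port in isolated mode. Assumes the port is configured
 * with at least 16 RX queues and is not yet started. */
static struct rte_flow *
setup_udp_rss(uint16_t port_id)
{
    struct rte_flow_error err;

    /* Isolated mode must be set before the port is started. */
    if (rte_flow_isolate(port_id, 1, &err) != 0)
        return NULL;

    static const struct rte_flow_attr attr = { .ingress = 1 };

    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    uint16_t queues[16];
    for (uint16_t i = 0; i < 16; i++)
        queues[i] = i;

    const struct rte_flow_action_rss rss = {
        .types     = ETH_RSS_UDP,  /* hash on UDP tuples */
        .queue_num = 16,
        .queue     = queues,
    };

    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}
```

(Requires a DPDK build environment and a bound NIC to actually run.)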

Looking at the statistics, we can see the missing packets counted in rx_discards_phy, apparently indicating a hardware bottleneck.

Until now, our application was never fast enough to hit this bottleneck, and we operated under the assumption that flow steering would be able to reach line rate. Apparently, we may have been wrong.

So here are my questions:

  • Are these limitations reasonable, or should we be able to receive 100Mpps even with flow steering rules?

  • (If not) Is there any DPDK or Mellanox configuration that we should use to speed up flow steering?

We are using DPDK 19.11 and the RHEL 7.9 inbox drivers. I upgraded the firmware to the latest version to no avail, and I tried switching to the OFED drivers, again to no avail.

Thank you very much for any help that you could provide

Hi,

Please refer to this link http://fast.dpdk.org/doc/perf/DPDK_21_05_Mellanox_NIC_performance_report.pdf for tuning guidance and performance results. You should be getting 148 Mpps, not 100, with 64-byte packets. If 100 Mpps is the maximum you are receiving, it might be a tuning or application issue. After reaching the numbers from the document, you can use testpmd (tested and verified) to exercise steering rules.

As a side note, please use the latest MOFED v5.4 and firmware, and the latest stable DPDK, in order to be sure that you are using software with the latest fixes.

If the issue is still happening, please provide more details about the setup: command lines, how flow steering is configured, the traffic pattern, and your host configuration. Be sure to use testpmd, as we cannot troubleshoot custom code.
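For reproducing the scenario in testpmd, a session along these lines should approximate the setup described above (port 0, 16 RX/TX queues; the EAL core list and queue counts are placeholders to adapt to your host):

```shell
# Launch testpmd with 16 RX/TX queues (adjust -l/-n to your host)
testpmd -l 0-16 -n 4 -- -i --rxq=16 --txq=16

# Inside the testpmd prompt: isolated mode must be set while
# the port is stopped, before creating the rule.
testpmd> port stop 0
testpmd> flow isolate 0 true
testpmd> port start 0
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
           actions rss queues 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 end / end
testpmd> start
testpmd> show port stats 0
```

Comparing rx_discards_phy with and without the `flow create` step should show whether the steering rule itself is the bottleneck.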

We can run basic troubleshooting on this forum; however, if the issue requires more serious debugging, like running debug tools, collecting logs, or executing different tests, that can only be done if your organization has a software support contract with NVIDIA.

Hi Aleksey

The 100Mpps is not the maximum; it's just one test we are doing. But we should also get 100Mpps with flow steering enabled. I see nothing related to flow steering or rte_flow rules in the performance reports.

I have tested with recent OFED drivers as well, but it made no difference. We can't test with recent versions of DPDK right now; porting to them would be a ton of work for us.

But maybe a more direct question: Should we be able to reach line rate results with RTE flow rules?

Hey @Baptiste Wicht,

We are seeing a comparable issue, but only with TCP-based traffic; UDP traffic reaches line rate.

https://community.mellanox.com/s/question/0D51T000097lyCaSAI/connectx6-dx-packet-drop-when-enabling-rss-rxqueues

Have you had a chance to further debug the issue?