I am currently investigating an odd performance issue with MPRQ (Multi-Packet RQ). I have tested both DPDK 21.11 and 23.11, and both exhibit the same issue. This is with a 100G ConnectX-6 NIC.
We test three scenarios:
- 80 Mpps of small V4 packets: no drops.
- 80 Mpps of small V6 packets (slightly larger than V4, crossing the 64-byte boundary): no drops.
- 40 Mpps V4 plus 40 Mpps V6 mixed: heavy drops (more than 50 Mpps dropped).
Without MPRQ, there are no drops in any of these three scenarios.
We are using MPRQ with mprq_log_stride_size=7 and mprq_log_stride_num=11 (mlx5 devargs). I have tested a few other combinations and did not see significant differences.
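For reference, this is roughly how we pass the parameters (mlx5 devargs via testpmd, shown for illustration; the PCI address, core list, and queue counts are placeholders from our setup):

    dpdk-testpmd -l 0-8 \
        -a 0000:3b:00.0,mprq_en=1,mprq_log_stride_size=7,mprq_log_stride_num=11 \
        -- --rxq=8 --txq=8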
Is this expected? Is MPRQ only optimized for the case where all received packets have the same size?
Interestingly, we had to shift our configuration significantly, to a log stride num of 3 and a log stride size of 10, i.e. fewer but larger strides. With that, we are still at line rate at 100G and we no longer see the issue with mixed traffic.
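If I understand the mlx5 MPRQ scheme correctly (each posted Rx buffer spans stride_num strides of stride_size bytes, so the per-buffer size is 2^log_stride_num * 2^log_stride_size), the two configurations work out to:

    original: 2^11 strides * 2^7  B = 2048 * 128  = 256 KiB per MPRQ buffer
    new:      2^3  strides * 2^10 B = 8    * 1024 =   8 KiB per MPRQ buffer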
I still find it surprising that such a small shift in the traffic mix causes such a large difference in drops.
Is there complete documentation anywhere, with examples, of how MPRQ works? I find the DPDK documentation lacking.