[MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets


I am currently struggling to get more than 53Gb/s with 64B packets on both the MCX515A-CCAT and MCX516A-CCAT adapters when running a DPDK app that generates and transmits packets. With 256B packets I can get 98Gb/s.

My question is: is this an inherent limitation of these NICs (i.e. do they only reach 100Gb/s with larger packets)?

If not, which firmware/driver/DPDK/system configurations should I tune to get 100Gb/s with small packets?

My setup is as follows:

  • CPU: E5-2697 v3 (14 cores, SMT disabled, CPU frequency fixed @ 2.6 GHz)
  • NIC: MCX515A-CCAT / MCX516A-CCAT (Using only one port for TX, installed on PCIe Gen3 x16)
  • DPDK: 19.05
  • RDMA-CORE: v28.0
  • Kernel: 5.3.0
  • OS: Ubuntu 18.04
  • Firmware: 16.26.1040
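For completeness, here is how I confirmed the adapter actually trained at PCIe Gen3 x16 (a degraded link would cap the small-packet rate); the 02:00.0 address matches my testpmd command, adjust it for your system:

```shell
# Show the negotiated PCIe link speed/width for the NIC
# (expect "Speed 8GT/s, Width x16" for Gen3 x16)
lspci -s 02:00.0 -vv | grep -i 'LnkSta:'
```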

I measured the TX rate with DPDK’s testpmd:

$ ./testpmd -l 3-13 -n 4 -w 02:00.0 -- -i --port-topology=chained --nb-ports=1 --rxq=10 --txq=10 --nb-cores=10 --burst=128 --rxd=512 --txd=512 --mbcache=512 --forward-mode=txonly

So 10 cores generate and transmit packets on 10 NIC queues.

Your feedback will be much appreciated.




Please be sure that you are running the latest and greatest software and firmware components. The current Mellanox OFED version is v5.0, and there is also newer firmware available.

Check host tuning. For reference, here is the link to performance test results that include BIOS settings, DPDK settings, command lines, and other parameters.
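As a starting point, the usual host-tuning items for this kind of test look roughly like this (a sketch only; the hugepage count and the core list are examples taken from your testpmd command, not recommendations):

```shell
# Reserve 1GB hugepages for DPDK (adjust the count to available memory)
echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# Use the performance governor on the cores running testpmd (3-13 here)
for c in /sys/devices/system/cpu/cpu{3..13}/cpufreq/scaling_governor; do
    echo performance > "$c"
done

# Keep the kernel off the packet-generation cores (set at boot, then reboot):
# GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=3-13 nohz_full=3-13 rcu_nocbs=3-13"
```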


If the issue still persists, please open a support case by sending an e-mail to support@mellanox.com, as your organization has a valid support contract with us.

Hi, thanks a lot for the answer; the issue is resolved. I was able to get 98Gb/s with 64B packets after setting the PCI maxReadRequest size to 1024 and turning off NIC flow control.
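For anyone hitting the same wall, the two changes can be applied roughly like this (the 02:00.0 address and the interface name are assumptions; the 0x68 register offset follows DPDK's NIC performance tuning guide and may differ on other systems):

```shell
# Read the PCIe Device Control register; the top nibble encodes the
# max read request size (0 = 128B, 1 = 256B, 2 = 512B, 3 = 1024B)
setpci -s 02:00.0 68.w

# Set max read request size to 1024B: keep the low three nibbles (XXX)
# from the value read above and replace the top nibble with 3
setpci -s 02:00.0 68.w=3XXX

# Disable link-level flow control on the port (interface name is an example)
ethtool -A enp2s0f0 rx off tx off
```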