Line rate using ConnectX-5 100G EN in Ubuntu; PCIe speed difference


I am trying to get line rate with two machines (A and B) connected back to back using ConnectX-5 EN 100G NICs.

Machine A (Transmitting pkts)

Run DPDK pktgen: sudo ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-5 -n 3 -w 04:00.0 -- -T -P -m "[1:2-5].0"

Machine B (Receiving pkts)

Run DPDK pktgen: sudo ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-5 -n 3 -w 04:00.0 -- -T -P -m "[1-4:5].0"
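For anyone reproducing this setup: pktgen, like any DPDK application, needs hugepages reserved before launch. A minimal sketch (the page count and mount point are assumptions, size them to your traffic profile):

```shell
# Reserve 1024 x 2 MB hugepages (2 GB total) for DPDK; the count is an
# assumption -- adjust for your mempool sizes.
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount a hugetlbfs so DPDK can map the pages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge

# Confirm the reservation took effect
grep Huge /proc/meminfo
```

Note that mlx5 uses the bifurcated kernel driver, so the NIC does not need to be bound to vfio-pci/igb_uio for DPDK to use it.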

Machine A is sending packets at 52 Gb/s.

Machine B is receiving packets at 16 Gb/s.

Here is some more information:

Dell R620 with Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (6 cores)

Only one CPU socket is populated.

HT is disabled.


DPDK 18.08

Ubuntu 16.04 LTS

MLNX_OFED_LINUX-4.4- was installed

My questions:

Q1. Why is machine B not able to receive more than 16G packets?

Q2. The PCIe link capability and link status speeds are different.

root:~$ sudo lspci -s 04:00.0 -vvv | grep Width

LnkCap: Port #0, Speed 16GT/s, Width x16, ASPM not supported, Exit Latency L0s unlimited, L1 unlimited

LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

As you see, speed available is 16GT/s, but only 8GT/s is used. How can I increase this? The card is installed in SLOT2_G2_X16(CPU1).
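To narrow this down, it helps to compare the capability of both ends of the link, since the negotiated speed is the lower of what the card and the slot support. A diagnostic sketch (the mst device name is an assumption, check `mst status` on your system):

```shell
# Compare what the card advertises (LnkCap) with what was negotiated (LnkSta).
# A LnkSta below LnkCap usually means the slot/root port is the limiting side.
sudo lspci -s 04:00.0 -vvv | grep -E 'LnkCap:|LnkSta:'

# With MLNX_OFED installed, mlxlink can also report the PCIe link state
# (the device path below is an assumption -- list devices with: sudo mst status)
# sudo mst start
# sudo mlxlink -d /dev/mst/mt4119_pciconf0 --port_type PCIE
```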




Q1. Have you tried a different benchmark, like iperf?

Q2. PCIe 3.0 speed is 8GT/s and PCIe 4.0 speed is 16GT/s; you would need a CPU/motherboard that supports PCIe 4.0, and AFAIK there is no such thing for Intel CPUs yet.
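For reference, the per-direction bandwidth for a PCIe link is transfer rate times lane count times encoding efficiency (PCIe 3.0 and 4.0 use 128b/130b encoding). A quick sketch of the arithmetic:

```shell
# Approximate per-direction PCIe bandwidth in Gb/s:
#   GT/s x lanes x 128/130 (128b/130b encoding overhead, PCIe 3.0+)
pcie_bw() { echo "$(( $1 * $2 * 128 / 130 ))"; }   # $1 = GT/s, $2 = lanes

pcie_bw 8 16    # PCIe 3.0 x16 -> 126 Gb/s, enough for one 100G port
pcie_bw 16 16   # PCIe 4.0 x16 -> 252 Gb/s
```

So even at the negotiated 8GT/s, a x16 link has headroom above 100 Gb/s before protocol overhead.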

Can I get some expert advice on this if possible?


I am not familiar with pktgen and thus I may miss the point. But what really catches my eye is your 52G throughput on the TX side. That’s exactly the same rate I saw when I ran an FIO test with ConnectX-5, and I am puzzled why it doesn’t come close to line rate (see Bad RoCEv2 throughput with ConnectX-5). I am using the stock Linux driver instead of Mellanox OFED.

Throughput on PCIe 3.0 at x16 is over 100 Gb/s.

The card itself supports PCIe 4.0.

If you do the BIOS tuning (Performance Tuning for Mellanox Adapters, BIOS Performance Tuning Example) and server tuning (Understanding PCIe Configuration for Maximum Performance, Linux sysctl Tuning) you can hit line rate. Since you’ve got an EN card, update the firmware to the newest version and set a high MTU of 9000.
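The firmware and MTU checks above can be done like this (the interface name `enp4s0f0` is an assumption, substitute your own from `ip link`):

```shell
# Check the currently loaded driver and firmware version
# (interface name is an assumption -- find yours with: ip link)
ethtool -i enp4s0f0 | grep -E 'driver|firmware'

# Raise the MTU to 9000; this does not persist across reboots
sudo ip link set dev enp4s0f0 mtu 9000
ip link show enp4s0f0 | grep mtu
```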

Before posting, I had already followed Performance Tuning for Mellanox Adapters, BIOS tuning, and Linux sysctl Tuning.

What I couldn’t understand is why I am not able to get 16GT/s speed in the PCIe - but only able to use 8GT/s. Any help/pointers would be highly appreciated.

I suspect the queues on the receiving machine are filling up, but I am not sure how to tune them. I say this because ethtool suggests there is no packet loss.
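One way to confirm that suspicion is to watch the mlx5 per-queue counters: on these NICs, `rx_out_of_buffer` climbing while the port counters show no loss typically means the host is not draining the RX rings fast enough. A diagnostic sketch (interface name is an assumption):

```shell
# Look for host-side RX drop indicators (interface name is an assumption)
ethtool -S enp4s0f0 | grep -Ei 'drop|out_of_buffer|discard'

# Inspect current vs maximum ring sizes, then enlarge the RX ring
ethtool -g enp4s0f0
sudo ethtool -G enp4s0f0 rx 8192
```

Larger rings only buy time, though; if the RX cores can't keep up, spreading flows across more queues/cores is the real fix.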

You have to get a motherboard that supports PCIe 4.0 to get 16GT/s (about 250 Gb/s at x16).

The OP should get at least 100G even with PCIe 3.0.

Thanks for the response, Martin.

For more information regarding the PCI specification, please see the following link → PCI Express - Wikipedia

For more information regarding performance tuning of mlx5 for DPDK, please see the following link → PCI Express - Wikipedia

Both links go to PCI Express on Wikipedia. Did you mean this link?

The next link contains performance tuning recommendations for a Dell PowerEdge R730, but some of them are also applicable to the R620.

There was no next link; did you forget to include a link, or did you mean this one?



Hi Arvind,

Many thanks for posting your question on the Mellanox Community.

Based on the information provided, and on some of the answers already given: PCIe 3.0 has a speed of 8GT/s and, depending on the system board configuration, x16 or x8 link widths. Some of our adapters already have PCIe 4.0 capability, which has a speed of 16GT/s.

Many thanks.

~Mellanox Technical Support

Hi Arvind,

I corrected my earlier answer with the correct links and added an extra link regarding our NICs’ performance with DPDK 18.02.

Many thanks.

~Mellanox Technical Support