RX ring parameter

Hello,

I am trying to obtain near line rate with a ConnectX-3 adapter (MCX354A), and to minimize or eliminate packet loss for high-throughput UDP transfers. One parameter that I have found makes a large difference is the receive ring buffer size. The default value was 1024, which I have increased to 8192:

ethtool -g eth2
Ring parameters for eth2:
Pre-set maximums:
RX:             8192
RX Mini:        0
RX Jumbo:       0
TX:             8192
Current hardware settings:
RX:             8192
RX Mini:        0
RX Jumbo:       0
TX:             8192

To what exactly does that number correspond? Is it a number of 4KB memory buffers (for 4KB pages)? Does each entry correspond to the current MTU? Basically, I am trying to determine how many packets of a given size can fit in the receive ring.

Regards,

Thomas

My understanding is that the RX ring size is the number of Ethernet frames the NIC can buffer. An Ethernet frame can be any size up to the MTU, so in your case the NIC can hold 8192 frames (of varying sizes) before it starts dropping new ones, unless flow control is enabled on both the NIC and the switch port (flow control pauses transmission on the switch port until the NIC's RX ring can accept more packets). If the kernel cannot copy frames from the RX ring into the network stack fast enough, the ring fills up and packets are discarded.
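As a quick sketch, resizing the ring and watching for ring-full drops can be done with ethtool like this (the interface name eth2 comes from your output; the exact statistics counter names vary by driver, so the grep pattern below is just an illustration):

```shell
# Show the current ring sizes and the hardware maximums
ethtool -g eth2

# Grow the RX ring to the hardware maximum (8192 on this adapter)
ethtool -G eth2 rx 8192

# Watch for drops caused by a full RX ring; the relevant counter
# name depends on the driver (e.g. rx_dropped or similar on mlx4)
ethtool -S eth2 | grep -i drop
```

Running the `-S` check before and after a test transfer shows whether the larger ring actually reduced drops.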

There’s another parameter, dev_weight, described in this doc, that might help with frame loss:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-network-common-queue-issues.html#s-network-commonque-nichwbuf

Thanks, that’s helpful. I will test a few different values for dev_weight to see how much difference it makes for my use case.
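For reference, dev_weight is exposed as a sysctl, so testing different values is straightforward; a sketch (the value 128 is just an illustrative choice, the kernel default is 64):

```shell
# Read the current value: the number of frames a NIC driver may drain
# from the ring in one NAPI poll cycle (default is 64)
sysctl net.core.dev_weight

# Temporarily raise it so each softirq pass drains more frames
sysctl -w net.core.dev_weight=128

# Persist the setting across reboots
echo 'net.core.dev_weight = 128' >> /etc/sysctl.conf
```

Raising it lets the kernel empty a backlogged RX ring faster at the cost of spending longer in softirq context per pass.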

Does this parameter make a difference if you are using VMA?