How do I configure packet pacing on a ConnectX-6 Dx (server running RHEL 8.8)?

Hi folks,

I’m trying to build an application that sends packet traffic at line rate (100 Gbps) using the “Packet Pacing (tx_pp)” feature.
However, I hit an error while enabling RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP in rte_eth_dev_configure().
Before running my application, I followed the instructions on this page. (ESPCommunity)
Did I miss something? Also, is there a way to check whether the feature is enabled?
I would appreciate it if anyone could share experiences, gotchas, comments, suggestions, or references from doing the same thing.
Kind regards,


My environment is:
Linux OS:
Linux 4.18.0-477.10.1.rt7.274.el8_8.x86_64 #1 SMP PREEMPT_RT Wed Apr 5 13:20:38 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux

Mellanox firmware version:
[root@MELB-TEST-D18 generator]# mlxfwmanager
Querying Mellanox devices firmware …

Device #1:

Device Type: ConnectX6DX
Part Number: 0F6FXM_08P2T2_Ax
Description: Mellanox ConnectX-6 Dx Dual Port 100 GbE QSFP56 Network Adapter
PSID: DEL0000000027
PCI Device Name: /dev/mst/mt4125_pciconf0
Base GUID: 946dae0300e22eda
Base MAC: 946daee22eda
Versions:      Current        Available
  FW           22.36.1010     N/A
  PXE          3.6.0901       N/A
  UEFI         14.29.0014     N/A

Status: No matching image found

The full error messages are:

EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:16:00.0 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:16:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created

Device driver name in use: mlx5_pci…
Initializing port 0 with 8 RX queues and 8 TX queues…
Ethdev port_id=0 requested Tx offloads 0x200000 doesn't match Tx offloads capabilities 0xd96af in rte_eth_dev_configure()
EAL: Error - exiting with code: 1
Cause: Cannot configure device: err=-22, port=0


It’s likely that these features conflict on the device. The packet pacing feature automatically schedules TX packets to be sent at calculated times to achieve the given rate, while the RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP flag in DPDK lets the application constrain the send time of each packet, so it doesn’t seem to make sense to use these two features together. Perhaps you need to reconsider your application design.

Hi Kirisame,

Thank you for your response.
I’m using DPDK at the moment and thought Packet Pacing worked in a DPDK environment.
By the way, is there any sample code for Packet Pacing?
I found this article and code, but I’m having a hard time compiling it.

Thank you,

I need to change my question.
How can I test 5t5g on a server without PTP synchronization?
The server doesn’t have a grandmaster on its network, and the ConnectX-6 DX is looped back between ports (port0 tx → port1 rx, port1 tx → port0 rx).
The server is synchronized to a local NTP server via a separate 1 Gbps Ethernet interface.
I believe there is a way to test “Packet Pacing” and “Accurate Tx Scheduling”.
Here are the current parameters for the ConnectX-6 DX interface.

sudo ethtool -T ens1f0np0
Time stamping parameters for ens1f0np0:
PTP Hardware Clock: 3
Hardware Transmit Timestamp Modes:
Hardware Receive Filter Modes:
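In case it helps others reading along: one way to probe whether the PMD accepts pacing setup without writing a full application is testpmd. This is only a sketch, assuming the mlx5 PMD’s documented tx_pp devarg; the PCI address is the one from the EAL log earlier in this thread, and the granularity value is illustrative:

```shell
# tx_pp=<nanoseconds> asks the mlx5 PMD to enable Accurate Tx Scheduling
# (packet pacing). This is expected to fail at probe time unless the
# firmware real-time clock is enabled on the NIC.
dpdk-testpmd -a 0000:16:00.0,tx_pp=500 -- -i
```

If the devarg is rejected or the port fails to probe, that points at the firmware/NIC configuration rather than the application.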

Any comments are welcome.
Thank you,


Hi Kevin,

Thank you for posting your query on the NVIDIA community and for sharing the additional details.

Unfortunately, based on the details shared, this will require debugging. Since you are using a Dell-branded PSID (DEL0000000027), it will first need to go through Dell Support.

The applicable FW parameter is REAL_TIME_CLOCK_ENABLE, so this should be taken up with the OEM, as the OEM has control over the FW.
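As a hedged sketch of how such a firmware parameter is normally inspected (availability of the parameter can depend on the OEM firmware build; the device path is the one shown in the mlxfwmanager output above):

```shell
# Query the current value of the real-time clock parameter
mlxconfig -d /dev/mst/mt4125_pciconf0 query REAL_TIME_CLOCK_ENABLE

# Enable it (takes effect after a firmware reset or reboot);
# on OEM-branded firmware this may be locked, hence the referral to Dell
mlxconfig -d /dev/mst/mt4125_pciconf0 set REAL_TIME_CLOCK_ENABLE=1
```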


Thank you, namrata1.
As per your advice, I’m contacting Dell.