ConnectX-6 VPI Socket Direct 2x with DPDK - supported by mlx5?


I am using ConnectX-6 VPI (MCX654106A-HCAT). It has 2x PCIe cards to deliver 200G on dual socket servers.

I have connected each of the two PCIe cards to a different CPU socket in my dual-socket server.

I am using DPDK’s rte_eth_rx_queue_setup() to configure port-queue-cpu mapping.

My question is: will making appropriate rte_eth_rx_queue_setup() calls ensure that packets avoid the UPI path and instead take advantage of Mellanox's Socket Direct feature?
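To make the intent concrete, here is a minimal sketch of the per-queue NUMA placement I have in mind. The even/odd queue-to-socket mapping below is purely my assumption for illustration (the real mapping depends on how the two PCIe halves of the Socket Direct card enumerate on the server), and pick_socket_for_queue() is a hypothetical helper, not a DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: with Socket Direct, the adapter appears as two
 * PCIe devices, one attached to each CPU socket. Here I assume even
 * queues are served from socket 0 and odd queues from socket 1; adjust
 * to however the two halves actually enumerate on your system. */
static unsigned int
pick_socket_for_queue(uint16_t queue_id)
{
    return queue_id % 2;
}

/* In the actual DPDK code, the returned socket id would be passed as
 * the socket_id argument of rte_eth_rx_queue_setup(), so that each
 * queue's descriptor ring and mbuf pool are allocated NUMA-local to
 * the PCIe half that feeds it, e.g.:
 *
 *   unsigned int s = pick_socket_for_queue(q);
 *   rte_eth_rx_queue_setup(port_id, q, nb_rxd, s, NULL, mbuf_pool[s]);
 */
```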

Note, I am using mlx5 driver.

I have never used the Socket Direct feature, so any clarity on how to use it with DPDK (if it is supported at all) would be super helpful.

Hello Arvind,

Many thanks for posting your question on the Mellanox Community.

Based on the information provided, the ConnectX-6 Socket Direct VPI adapter is unfortunately not supported yet.

Support will be implemented at a later stage. Please check the website regularly for the latest versions available with the latest supported Mellanox adapters.

Many thanks,

~Mellanox Technical Support

That answer is misleading. The DPDK mlx5 PMD documentation, under "Mellanox OFED/EN", lists "ConnectX-6: 20.99.5374 and above".

The very first line of that documentation also states, and I quote:

“The MLX5 poll mode driver library (librte_pmd_mlx5) provides support for Mellanox ConnectX-4, Mellanox ConnectX-4 Lx , Mellanox ConnectX-5, Mellanox ConnectX-6 and Mellanox BlueField families of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF) in SR-IOV context.”

I am not sure what you mean by "not yet supported". Even the Mellanox ConnectX-6 product sheet specifies that it supports DPDK.

Q1. Are you still saying it's not supported?

Q2. Could you please let me know how to use the Socket Direct feature with or without DPDK?

Could someone please respond?


Hello Arvind,

We will respond through the support ticket you have opened.

Many thanks,

~Mellanox Technical Support