Does the MCX-312b (ConnectX-3 Pro) NOT support the ibv_flow_spec_ipv4 rule?

For this NIC, ethtool reports that mlx4_flow_steering_ipv4 is supported, as below:

ethtool --show-priv-flags ens2

Private flags for ens2:
blueflame: on
mlx4_flow_steering_ethernet_l2: on
mlx4_flow_steering_ipv4: on
mlx4_flow_steering_tcp: on
mlx4_flow_steering_udp: on
qcn_disable_32_14_4_e: off
rx-copy: off
phv-bit: off
rx-fcs: off
rx-all: off
mlx4_rss_xor_hash_function: off

But ibv_query_device reports that IBV_DEVICE_MANAGED_FLOW_STEERING is NOT supported.

So, does this NIC actually support ipv4 flow steering programmatically, or does it only support the MAC -> QP flow steering rule?

BTW, if the ConnectX-3 Pro only supports the MAC -> QP flow steering rule, does that mean it cannot support multicast receive acceleration (kernel bypass) with libvma?

The EXP_MANAGED_FLOW_STEERING flag is shown.

Does this mean the NIC supports the ibv_exp_ verbs programming APIs, such as ibv_exp_create_flow, and ALL of the ibv_exp_ flow steering rules, such as ibv_exp_flow_spec_eth, ibv_exp_flow_spec_ipv4, and ibv_exp_flow_spec_tcp_udp?

I wrote a test program; flow steering works with only the MAC -> QP rule. If I add ipv4 or tcp/udp -> QP rules, the flow can NOT be steered into the QP.

Besides, I recently found that libvma can actually support kernel-bypass acceleration for multicast, which shows the ConnectX-3 Pro supports some form of flow steering as well.

So the question becomes: does the ConnectX-3 Pro support only the MAC -> QP rule, or also ipv4 and tcp/udp -> QP rules?

Can a libvma developer give a definitive answer?

@alkx, thank you very much!

I’ve already been reading the libvma code.

BTW, the EXP_MANAGED_FLOW_STEERING flag means support for the ibv_exp_create_flow (and related) APIs, while MANAGED_FLOW_STEERING means support for the ibv_create_flow (and related) APIs, right?

If so, since the ‘ibv_devinfo -v’ output shows the EXP_MANAGED_FLOW_STEERING flag but NO MANAGED_FLOW_STEERING flag, the ConnectX-3 Pro only supports the ibv_exp_create_flow series of APIs, right?

The RAW QP (the one used by VMA) does support 5-tuple rules. As VMA is open source (GitHub - Mellanox/libvma: Linux user space library for network socket acceleration based on RDMA compatible network adaptors), you can check the implementation there.

As a startup point, take a look at these functions

prepare_flow_spec

ibv_create_flow

ring_simple::attach_flow

rfs::attach_flow

If you have more specific questions regarding VMA, there is a discussion forum on Google Groups.

Hi,

What is the output of the following?

# cat /etc/release
# uname -a
# ofed_info -s
# ibv_devinfo -v

  1. OS: CentOS Linux release 7.1.1503

  2. ‘uname -a’ output: Linux testlab5 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  3. ofed_info -s

MLNX_OFED_LINUX-3.4-1.0.0.0:

  4. ibv_devinfo -v

sudo ibv_devinfo -v

hca_id: mlx4_0

transport: InfiniBand (0)

fw_ver: 2.36.5150

node_guid: e41d:2d03:002b:dba0

sys_image_guid: e41d:2d03:002b:dba0

vendor_id: 0x02c9

vendor_part_id: 4103

hw_ver: 0x0

board_id: MT_1200111023

phys_port_cnt: 2

max_mr_size: 0xffffffffffffffff

page_size_cap: 0xfffffe00

max_qp: 392632

max_qp_wr: 16351

device_cap_flags: 0x005c9c66

BAD_PKEY_CNTR

BAD_QKEY_CNTR

CHANGE_PHY_PORT

UD_AV_PORT_ENFORCE

PORT_ACTIVE_EVENT

SYS_IMAGE_GUID

RC_RNR_NAK_GEN

XRC

Unknown flags: 0x004c8000

device_cap_exp_flags: 0x71221C1600000000

EXP_DEVICE_QPG

EXP_UD_RSS

EXP_MEM_WINDOW

EXP_MEM_MGT_EXTENSIONS

EXP_MW_TYPE_2B

EXP_CROSS_CHANNEL

EXP_MANAGED_FLOW_STEERING

EXP_MR_ALLOCATE

EXT_ATOMICS

EXP_VXLAN_SUPPORT

EXP_RX_CSUM_TCP_UDP_PKT

EXP_RX_CSUM_IP_PKT

max_sge: 32

max_sge_rd: 30

max_cq: 65408

max_cqe: 4194303

max_mr: 524032

max_pd: 32764

max_qp_rd_atom: 16

max_ee_rd_atom: 0

max_res_rd_atom: 6282112

max_qp_init_rd_atom: 128

max_ee_init_rd_atom: 0

atomic_cap: ATOMIC_HCA (1)

log atomic arg sizes (mask) 0x8

masked_log_atomic_arg_sizes (mask) 0x8

masked_log_atomic_arg_sizes_network_endianness (mask) 0x0

max fetch and add bit boundary 64

log max atomic inline 3

max_ee: 0

max_rdd: 0

max_mw: 0

max_raw_ipv6_qp: 0

max_raw_ethy_qp: 0

max_mcast_grp: 131072

max_mcast_qp_attach: 244

max_total_mcast_qp_attach: 31981568

max_ah: 2147483647

max_fmr: 0

max_srq: 65472

max_srq_wr: 16383

max_srq_sge: 31

max_pkeys: 128

local_ca_ack_delay: 15

hca_core_clock: 318

max_klm_list_size: 0

max_send_wqe_inline_klms: 0

max_umr_recursion_depth: 0

max_umr_stride_dimension: 0

general_odp_caps:

rc_odp_caps:

NO SUPPORT

uc_odp_caps:

NO SUPPORT

ud_odp_caps:

NO SUPPORT

dc_odp_caps:

NO SUPPORT

xrc_odp_caps:

NO SUPPORT

raw_eth_odp_caps:

NO SUPPORT

max_dct: 0

max_device_ctx: 1016

Multi-Packet RQ is not supported

rx_pad_end_addr_align: 0

tso_caps:

max_tso: 0

packet_pacing_caps:

qp_rate_limit_min: 0kbps

qp_rate_limit_max: 0kbps

Device ports:

port: 1

state: PORT_ACTIVE (4)

max_mtu: 4096 (5)

active_mtu: 1024 (3)

sm_lid: 0

port_lid: 0

port_lmc: 0x00

link_layer: Ethernet

max_msg_sz: 0x40000000

port_cap_flags: 0x0c010000

max_vl_num: 2 (2)

bad_pkey_cntr: 0x0

qkey_viol_cntr: 0x0

sm_sl: 0

pkey_tbl_len: 1

gid_tbl_len: 128

subnet_timeout: 0

init_type_reply: 0

active_width: 1X (1)

active_speed: 10.0 Gbps (4)

phys_state: LINK_UP (5)

GID[ 0]: fe80:0000:0000:0000:e61d:2dff:fe2b:dba0

GID[ 1]: 0000:0000:0000:0000:0000:ffff:0a01:c817

port: 2

state: PORT_ACTIVE (4)

max_mtu: 4096 (5)

active_mtu: 1024 (3)

sm_lid: 0

port_lid: 0

port_lmc: 0x00

link_layer: Ethernet

max_msg_sz: 0x40000000

port_cap_flags: 0x0c010000

max_vl_num: 2 (2)

bad_pkey_cntr: 0x0

qkey_viol_cntr: 0x0

sm_sl: 0

pkey_tbl_len: 1

gid_tbl_len: 128

subnet_timeout: 0

init_type_reply: 0

active_width: 1X (1)

active_speed: 10.0 Gbps (4)

phys_state: LINK_UP (5)

GID[ 0]: fe80:0000:0000:0000:e61d:2dff:fe2b:dba1

GID[ 1]: 0000:0000:0000:0000:0000:ffff:0a01:c90d

Hi,

The device does report flow steering support; the EXP_MANAGED_FLOW_STEERING flag is shown in the output. For additional details, please refer to the Mellanox OFED User Manual - ‘Flow Steering Support’.

ibv_exp_* is the way to go if support for the latest features, like flow steering, is necessary.