Configuring ConnectX-5 PFC without using VLAN

Hello, I am trying to enable PFC between two ConnectX-5 adapters without using VLAN.

I followed the [MLNX_OFED document](https://docs.nvidia.com/networking/display/MLNXOFEDv23040533/Flow+Control#FlowControl-PFCConfigurationonHosts).

When I check the prio[n]_rx_frames counters with ethtool -S <ethX>, only the prio0 counters increase. I set the ToS using the iperf -S 0x8 option.
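
For reference, these are roughly the commands I run (the interface name and server address are placeholders). If I read the kernel's ip_tos2prio table correctly, ToS 0x8 should map to skb priority 2, so I expected the prio2 counters to increase:

iperf -c <server_ip> -S 0x8
ethtool -S <ethX> | grep prio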

It works well when a VLAN is configured, as described in the same documentation. The documentation states: "1. If the underlying device is not a VLAN device, the mapping is done in the driver." That mapping does not seem to be working properly in my setup. Is there any additional setting I should have applied to map the socket priority to a UP?
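
As an experiment to take the ToS-to-priority mapping out of the picture, I am also considering forcing the skb priority directly with tc's skbedit action (a sketch, assuming iperf's default port 5001; I have not verified this is the intended approach):

sudo tc qdisc add dev <ethX> clsact
sudo tc filter add dev <ethX> egress protocol ip flower ip_proto tcp dst_port 5001 action skbedit priority 3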

This is my hostnamectl output.

Operating System: Ubuntu 22.04.2 LTS              
          Kernel: Linux 5.15.0-71-generic
    Architecture: x86-64
 Hardware Vendor: Supermicro
  Hardware Model: SYS-420GP-TNR

This is my ofed_info output.

MLNX_OFED_LINUX-23.04-0.5.3.3 (OFED-23.04-0.5.3):

clusterkit:
mlnx_ofed_clusterkit/clusterkit-1.8.430-1.src.rpm

dpcp:
/sw/release/sw_acceleration/dpcp/dpcp-1.1.39-1.src.rpm

hcoll:
mlnx_ofed_hcol/hcoll-4.8.3221-1.src.rpm

ibarr:
https://github.com/Mellanox/ip2gid master
commit 44ac1948d0d604c723bc36ade0af02c54e7fc7d2

ibdump:
https://github.com/Mellanox/ibdump master
commit d0a4f5aabf21580bee9ba956dfff755b1dd335c3

ibsim:
mlnx_ofed_ibsim/ibsim-0.12.tar.gz

ibutils2:
ibutils2/ibutils2-2.1.1-0.162.MLNX20230417.g738750f2.tar.gz

iser:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

isert:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

kernel-mft:
mlnx_ofed_mft/kernel-mft-4.24.0-72.src.rpm

knem:
knem.git mellanox-master
commit 63d4f292ab7e5a57475a6d280c5d8865a6156b7b

libvma:
vma/source_rpms//libvma-9.8.20-1.src.rpm

libxlio:
/sw/release/sw_acceleration/xlio/libxlio-3.0.2-1.src.rpm

mlnx-dpdk:
https://github.com/Mellanox/dpdk.org mlnx_dpdk_22.11_last_stable
commit 07b550e9eda94e45030254fb0701960520cae943

mlnx-en:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

mlnx-ethtool:
mlnx_ofed/ethtool.git mlnx_ofed_23_04
commit ae58de189f35dd91f61ec0595e7fbd169c381987

mlnx-iproute2:
mlnx_ofed/iproute2.git mlnx_ofed_23_04
commit a9d62d85973e580b180d74ff11874bf2b3d4e1c3

mlnx-nfsrdma:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

mlnx-nvme:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

mlnx-ofa_kernel:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

mlnx-tools:
https://github.com/Mellanox/mlnx-tools mlnx_ofed
commit 1cae4db9ec7807db1b346eca2a017022729c4f24

mlx-steering-dump:
https://github.com/Mellanox/mlx_steering_dump mlnx_ofed_23_04
commit fc616d9a8f62113b0da6fc5a8948b11177d8461e

mpitests:
mlnx_ofed_mpitest/mpitests-3.2.20-de56b6b.src.rpm

mstflint:
mlnx_ofed_mstflint/mstflint-4.16.1-2.tar.gz

multiperf:
http://l-gerrit.mtl.labs.mlnx:8080/Performance/multiperf rdma-core-support
commit d3fad92dc6984e43cc5377ba0a3126808432ce2d

ofed-docs:
docs.git mlnx_ofed-4.0
commit 3d1b0afb7bc190ae5f362223043f76b2b45971cc

openmpi:
mlnx_ofed_ompi_1.8/openmpi-4.1.5rc2-1.src.rpm

opensm:
mlnx_ofed_opensm/opensm-5.15.0.MLNX20230417.d84ecf64.tar.gz

openvswitch:
https://gitlab-master.nvidia.com/sdn/ovs mlnx_ofed_23_04
commit e054917e557b06eeeb0c328e316f0c2e404db426

perftest:
mlnx_ofed_perftest/perftest-23.04.0-0.23.g63e250f.tar.gz

rdma-core:
mlnx_ofed/rdma-core.git mlnx_ofed_23_04
commit 0c98842b6bf42227f639189a7cd6f3a5bd21e27b

rshim:
mlnx_ofed_soc/rshim-2.0.6-19.g0873acd.src.rpm

sharp:
mlnx_ofed_sharp/sharp-3.3.0.MLNX20230417.ec919ce9.tar.gz

sockperf:
sockperf/sockperf-3.10-0.git5ebd327da983.src.rpm

srp:
mlnx_ofed/mlnx-ofa_kernel-4.0.git mlnx_ofed_23_04
commit 8bc8aa25cb4afad671fd45ed3e04e17eaf39c853

ucx:
mlnx_ofed_ucx/ucx-1.15.0-1.src.rpm

xpmem:
xpmem.git mellanox-master
commit 41ce504ae323d9c8eb38abdf3949edc35070f454

Installed Packages:
-------------------
ii  dpcp                                       1.1.39-1.2304053                           amd64        Direct Packet Control Plane (DPCP) is a library to use Devx
ii  hcoll                                      4.8.3221-1.2304053                         amd64        Hierarchical collectives (HCOLL)
ii  ibacm                                      2304mlnx44-1.2304053                       amd64        InfiniBand Communication Manager Assistant (ACM)
ii  ibarr:amd64                                0.1.3-1.2304053                            amd64        Nvidia address and route userspace resolution services for Infiniband
ii  ibdump                                     6.0.0-1.2304053                            amd64        Mellanox packets sniffer tool
ii  ibsim                                      0.12-1.2304053                             amd64        InfiniBand fabric simulator for management
ii  ibsim-doc                                  0.12-1.2304053                             all          documentation for ibsim
ii  ibutils2                                   2.1.1-0.162.MLNX20230417.g738750f2.2304053 amd64        OpenIB Mellanox InfiniBand Diagnostic Tools
ii  ibverbs-providers:amd64                    2304mlnx44-1.2304053                       amd64        User space provider drivers for libibverbs
ii  ibverbs-utils                              2304mlnx44-1.2304053                       amd64        Examples for the libibverbs library
ii  infiniband-diags                           2304mlnx44-1.2304053                       amd64        InfiniBand diagnostic programs
ii  iser-dkms                                  23.04-OFED.23.04.0.5.3.1                   all          DKMS support fo iser kernel modules
ii  isert-dkms                                 23.04-OFED.23.04.0.5.3.1                   all          DKMS support fo isert kernel modules
ii  kernel-mft-dkms                            4.24.0-72                                  all          DKMS support for kernel-mft kernel modules
ii  knem                                       1.1.4.90mlnx2-OFED.23.04.0.5.2.1           amd64        userspace tools for the KNEM kernel module
ii  knem-dkms                                  1.1.4.90mlnx2-OFED.23.04.0.5.2.1           all          DKMS support for mlnx-ofed kernel modules
ii  libibmad-dev:amd64                         2304mlnx44-1.2304053                       amd64        Development files for libibmad
ii  libibmad5:amd64                            2304mlnx44-1.2304053                       amd64        Infiniband Management Datagram (MAD) library
ii  libibnetdisc5:amd64                        2304mlnx44-1.2304053                       amd64        InfiniBand diagnostics library
ii  libibumad-dev:amd64                        2304mlnx44-1.2304053                       amd64        Development files for libibumad
ii  libibumad3:amd64                           2304mlnx44-1.2304053                       amd64        InfiniBand Userspace Management Datagram (uMAD) library
ii  libibverbs-dev:amd64                       2304mlnx44-1.2304053                       amd64        Development files for the libibverbs library
ii  libibverbs1:amd64                          2304mlnx44-1.2304053                       amd64        Library for direct userspace use of RDMA (InfiniBand/iWARP)
ii  libibverbs1-dbg:amd64                      2304mlnx44-1.2304053                       amd64        Debug symbols for the libibverbs library
ii  libopensm                                  5.15.0.MLNX20230417.d84ecf64-0.1.2304053   amd64        Infiniband subnet manager libraries
ii  libopensm-devel                            5.15.0.MLNX20230417.d84ecf64-0.1.2304053   amd64        Development files for OpenSM
ii  librdmacm-dev:amd64                        2304mlnx44-1.2304053                       amd64        Development files for the librdmacm library
ii  librdmacm1:amd64                           2304mlnx44-1.2304053                       amd64        Library for managing RDMA connections
ii  mlnx-ethtool                               6.0-1.2304053                              amd64        This utility allows querying and changing settings such as speed,
ii  mlnx-iproute2                              6.2.0-1.2304053                            amd64        This utility allows querying and changing settings such as speed,
ii  mlnx-ofed-kernel-dkms                      23.04-OFED.23.04.0.5.3.1                   all          DKMS support for mlnx-ofed kernel modules
ii  mlnx-ofed-kernel-utils                     23.04-OFED.23.04.0.5.3.1                   amd64        Userspace tools to restart and tune mlnx-ofed kernel modules
ii  mlnx-tools                                 23.04-0.2304053                            amd64        Userspace tools to restart and tune MLNX_OFED kernel modules
ii  mpitests                                   3.2.20-de56b6b.2304053                     amd64        Set of popular MPI benchmarks and tools IMB 2018 OSU benchmarks ver 4.0.1 mpiP-3.3 IPM-2.0.6
ii  mstflint                                   4.21.0-7                                   amd64        mstflint - Mellanox firmware burning tools
ii  openmpi                                    4.1.5rc2-1.2304053                         all          Open MPI
ii  opensm                                     5.15.0.MLNX20230417.d84ecf64-0.1.2304053   amd64        An Infiniband subnet manager
ii  opensm-doc                                 5.15.0.MLNX20230417.d84ecf64-0.1.2304053   amd64        Documentation for opensm
ii  perftest                                   23.04.0-0.23.g63e250f.2304053              amd64        Infiniband verbs performance tests
ii  rdma-core                                  2304mlnx44-1.2304053                       amd64        RDMA core userspace infrastructure and documentation
ii  rdmacm-utils                               2304mlnx44-1.2304053                       amd64        Examples for the librdmacm library
ii  rshim                                      2.0.6-19.g0873acd.2304053                  amd64        driver for Mellanox BlueField SoC
ii  sharp                                      3.3.0.MLNX20230417.ec919ce9-1.2304053      amd64        SHArP switch collectives
ii  srp-dkms                                   23.04-OFED.23.04.0.5.3.1                   all          DKMS support fo srp kernel modules
ii  srptools                                   2304mlnx44-1.2304053                       amd64        Tools for Infiniband attached storage (SRP)
ii  ucx                                        1.15.0-1.2304053                           amd64        Unified Communication X

Hello @jounghoolee,

Thank you for posting your query on our community. Please verify that PFC has been enabled on the correct priority using this command → # mlnx_qos -i

If not, use the command below to enable it.
For example, to enable priority 3 → # mlnx_qos -i -f 0,0,0,1,0,0,0,0

More information on this command can be found here → https://nvid.nvidia.com/espContent/index.html?type=article&id=mlnx-qos

If the issue still persists, I would request that you submit a support ticket and provide a snapshot for further troubleshooting.

A support ticket can be opened by emailing Networking-support@nvidia.com. Please note that an active support contract is required. For contract information, please feel free to reach out to our contracts team at Networking-Contracts@nvidia.com.

Thanks,
Bhargavi

Apologies, the commands were cropped in my previous update. The commands should be:

# mlnx_qos -i interface_name
# mlnx_qos -i interface_name -f 0,0,0,1,0,0,0,0
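
For reference, once the priority is enabled, the PFC section of the mlnx_qos output should look roughly like the below (illustrative only; the exact formatting may differ between versions):

PFC configuration:
        priority    0   1   2   3   4   5   6   7
        enabled     0   0   0   1   0   0   0   0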

Thanks,
Bhargavi

Thank you, Sribhargavid,

I have followed all the instructions carefully, including mlnx_qos -i *interface_name* -f 0,0,0,1,0,0,0,0. It still does not work without a VLAN.

However, I found a workaround: configuring a VLAN with ID 0.
sudo ip link add link <ethX> name <ethX>.0 type vlan id 0
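
For completeness, iproute2 can also attach an explicit skb-priority-to-UP mapping when creating the VLAN interface; the identity mapping below is only an illustration, not something taken from the documentation:

sudo ip link add link <ethX> name <ethX>.0 type vlan id 0 egress-qos-map 0:0 1:1 2:2 3:3 4:4 5:5 6:6 7:7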

In any case, are you saying that ConnectX officially supports PFC without VLAN (i.e., that the workaround I mentioned should not be needed)?
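
For example, I wonder whether the port trust state needs to be set to DSCP for the ToS-based mapping to work without a VLAN; if I understand mlnx_qos correctly, that would be something like:

sudo mlnx_qos -i <ethX> --trust dscp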

P.S. Thank you for the support ticket information. The mlnx-qos URL you posted is not accessible.