Determining the Maximum Value of Combined Channel Based on sriov_numvfs in Guest OS

Hello,

I am currently working in the following environment:

Hardware

| Item   | Details        |
|--------|----------------|
| Server | Dell R7615     |
| CPU    | AMD EPYC 9654P |
| Memory | 384 GB         |
| NUMA   | 1              |
| NIC    | ConnectX-6 Lx  |

Software Versions

| Item             | Details                    |
|------------------|----------------------------|
| OS               | Ubuntu 22.04.2 LTS         |
| Kernel           | 5.15                       |
| OpenStack        | Yoga                       |
| OVN              | 22.03                      |
| OVS              | 2.17.5                     |
| MLNX_OFED Driver | 5.8-2.0.3                  |
| Firmware         | 26.35.1012 (DEL0000000031) |

SmartNIC Configuration

| Item                              | Details   |
|-----------------------------------|-----------|
| Driver - vport_match_mode         | metadata  |
| Driver - steering_mode            | smfs      |
| Driver - ct_action_on_nat_conns   | disable   |
| Driver - ct_labels_mapping        | disable   |
| Devlink Param - num_of_groups     | 15        |
| MSTConfig - PF_NUM_PF_MSIX_VALID  | False (0) |
| MSTConfig - STRICT_VF_MSIX_NUM    | False (0) |
| MSTConfig - NUM_PF_MSIX_VALID     | True (1)  |
| MSTConfig - NUM_PF_MSIX           | 127       |
| MSTConfig - NUM_VF_MSIX           | 127       |
| MSTConfig - PF_NUM_PF_MSIX        | 63        |
| MSTConfig - DYNAMIC_VF_MSIX_TABLE | False (0) |
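
For reference, the MSTConfig values above can be read with mlxconfig; the MST device path below is only a placeholder for the actual device on this host:

# query the firmware configuration parameters relevant to MSI-X and VFs
mlxconfig -d /dev/mst/mt4127_pciconf0 query | grep -E 'MSIX|NUM_OF_VFS'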

It appears that the maximum value of the Combined Channel in the Guest OS is determined by sriov_numvfs. Is there a specific formula that determines the Combined Channel maximum in the Guest OS?

Here are the test results showing how the Combined Channel pre-set maximum varies with the sriov_numvfs value:

| NUM_VF_MSIX | sriov_numvfs | Combined Channel Pre-set |
|-------------|--------------|--------------------------|
| 127         | 48           | 9                        |
| 127         | 32           | 13                       |
| 127         | 20           | 23                       |
| 127         | 15           | 29                       |
| 127         | 14           | 31                       |
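
For reference, the "Combined Channel Pre-set" values above come from ethtool -l inside the guest. A couple of additional ways to cross-check the per-VF MSI-X budget from the guest side (the interface name and PCI address below are placeholders for how the VF appears in the VM):

# pre-set maximum combined channels reported by the driver in the guest
ethtool -l eth0

# MSI-X table size granted to the VF by the firmware
lspci -vv -s 00:04.0 | grep 'MSI-X'

# number of MSI-X vectors actually allocated by the driver for this interface
ls /sys/class/net/eth0/device/msi_irqs | wc -l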

Thank you in advance for your help.

Best regards,

Hi Kyoon,
What do you mean by ‘combined channel’ here?
Could you list the query command?

Regards,
Levei

Hi Levei,

The ‘combined channel’ I mentioned refers to the combined setting of the transmit (TX) and receive (RX) queues in a network interface card (NIC). This setting is often referred to as ‘Receive Side Scaling (RSS)’ or ‘Multiqueue’ in the context of NIC configurations.

The command to check the combined channel setting varies depending on the operating system and the network driver in use. On a Linux system with an Ethernet interface (for example, eth0), you can check the combined channel setting with the following command:

ethtool -l eth0

This command will display the current and maximum settings for the TX, RX, and combined channels.
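
If you also need to change the current value (it cannot exceed the pre-set maximum), the upper-case -L option is used, for example:

# set the interface to 8 combined TX/RX queue pairs (must not exceed the pre-set maximum)
ethtool -L eth0 combined 8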

Please let me know if you need further clarification.

Best, Kyoon

That’s odd.
The combined value shouldn’t change based on sriov_numvfs.
But let me check further first.

Did you change sriov_numvfs as shown below, and then see the combined value change?

# create 48 VFs on the PF, then check the PF's channel settings
echo 48 > /sys/class/net/eth0/device/sriov_numvfs
ethtool -l eth0

# destroy the VFs, recreate 32 of them, and check again
echo 0 > /sys/class/net/eth0/device/sriov_numvfs
echo 32 > /sys/class/net/eth0/device/sriov_numvfs
ethtool -l eth0

Levei

Hi Levei,

Thank you for your response. I apologize if there was any confusion, but I’m not referring to the combined channel (multiqueue) of the PF.
I’m referring to the multiqueue within the VM that uses a VF created by setting sriov_numvfs.

As you mentioned, it wouldn’t make sense for the PF’s multiqueue to change when adjusting the VF settings.
From what I understand, the upper limit of the multiqueue that a VF can use is set by NUM_VF_MSIX.
Currently, NUM_VF_MSIX is set to its maximum value of 127.
However, as shown below, the multiqueue limit varies depending on the number of VFs.

| NUM_VF_MSIX | sriov_numvfs | Combined Channel Pre-set |
|-------------|--------------|--------------------------|
| 127         | 48           | 9                        |
| 127         | 32           | 13                       |
| 127         | 20           | 23                       |
| 127         | 15           | 29                       |
| 127         | 14           | 31                       |

There was a missing detail in the environment I previously described: it’s not just an environment that uses SR-IOV, but one that also uses hardware offload (HWOL) and VF-LAG. Changing the number of VFs requires a reboot because of the bond configuration, so not all test cases could be run.

When a different number of VFs is configured and a VM is created with the same 32 cores, the maximum multiqueue that can be used in the VM changes depending on the number of VFs created.
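
One way I think this can be confirmed from the host is to look at the MSI-X table size the firmware grants to each VF; the VF PCI address below is a placeholder:

# MSI-X table size of one VF as seen from the hypervisor
lspci -vv -s c6:01.2 | grep 'MSI-X'

# on kernels/firmware with dynamic VF MSI-X assignment (DYNAMIC_VF_MSIX_TABLE enabled),
# the per-VF vector count is also exposed (and writable) through sysfs
cat /sys/bus/pci/devices/0000:c6:01.2/sriov_vf_msix_count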

I hope this clarifies my question. I look forward to your further insights.

Best regards,
kyoon

I’m sharing the information you requested.
VF Interface Information

root@Qaamdhost02:# ip l show enp198s0f1np1
8: enp198s0f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 3e:e3:80:46:9f:52 brd ff:ff:ff:ff:ff:ff permaddr 10:70:fd:6e:5d:43
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 2     link/ether fa:16:3e:91:bb:0f brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 3     link/ether fa:16:3e:31:d3:63 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 4     link/ether 06:b1:c3:26:71:93 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 5     link/ether fa:16:3e:40:bd:7c brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 6     link/ether fa:16:3e:66:8e:dd brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 7     link/ether fa:16:3e:f2:e8:81 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 8     link/ether fa:16:3e:0d:04:6e brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 9     link/ether 8e:ff:69:98:5b:d1 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 10     link/ether fa:16:3e:f2:49:9f brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 11     link/ether fa:16:3e:16:58:a2 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 12     link/ether fa:16:3e:93:08:96 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 13     link/ether fa:16:3e:c3:f9:4f brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off

**PF multi queue**

root@Qaamdhost02:# ethtool -l enp198s0f1np1
Channel parameters for enp198s0f1np1:
Pre-set maximums:
RX:             n/a
TX:             n/a
Other:          n/a
Combined:       127
Current hardware settings:
RX:             n/a
TX:             n/a
Other:          n/a
Combined:       4

**Representor port multi queue**

root@Qaamdhost02:# ethtool -l enp198s0f1vf13
Channel parameters for enp198s0f1vf13:
Pre-set maximums:
RX:             n/a
TX:             n/a
Other:          n/a
Combined:       127
Current hardware settings:
RX:             n/a
TX:             n/a
Other:          n/a
Combined:       2

**Guest port multi queue**

root@amd:# ethtool -l ens4np0
Channel parameters for ens4np0:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       31
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       31

Guest lscpu

root@amd:# lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   52 bits physical, 57 bits virtual
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              1
Core(s) per socket:              32
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
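
As far as I understand, the mlx5 driver caps the maximum combined channels at the smaller of the completion vectors it obtains from MSI-X and the number of online CPUs; with 32 vCPUs here, the 31-channel ceiling suggests the per-VF MSI-X budget is the binding limit rather than the CPU count. The vector count allocated inside the guest can be checked with:

# count the MSI-X vectors allocated for the VF interface inside the guest
ls /sys/class/net/ens4np0/device/msi_irqs | wc -l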

Hi Levei,

I have been trying to understand the relationship between NUM_VF_MSIX and NUM_OF_VFS through the following document:
https://docs.nvidia.com/networking/display/ConnectX6LxFirmwarev26311014/Changes+and+New+Features

The document explains the correlation between the number of MSI-X vectors per VF (NUM_VF_MSIX) and the number of VFs (NUM_OF_VFS). However, it does not describe a specific formula for how these values are determined. Here’s the relevant excerpt:

“Note that increasing the number of MSIX per VF (NUM_VF_MSIX) affects the configured number of VFs (NUM_OF_VFS). The firmware may reduce the configured number of MSIX per VF and/or the number of VFs with respect to maximum number of MSIX vectors supported by the device (MAX_TOTAL_MSIX).”
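
If it helps, this is the kind of sequence I read that note as describing; a sketch only, with the MST device path as a placeholder, and the new values only take effect after a firmware reset or reboot:

# request a per-VF MSI-X budget and a VF count, then re-read what the firmware kept
# (it may lower one or the other so the total fits within MAX_TOTAL_MSIX)
mlxconfig -d /dev/mst/mt4127_pciconf0 set NUM_VF_MSIX=127 NUM_OF_VFS=48
mlxconfig -d /dev/mst/mt4127_pciconf0 query | grep -E 'NUM_VF_MSIX|NUM_OF_VFS'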

If anyone could provide insight into this, it would be greatly appreciated.