VLAN-aware Linux bridging is not functional on a ConnectX-4 Lx card unless it is manually put in promiscuous mode

When an adapter is configured for VLAN-aware Linux bridging, all traffic stops flowing on the bridge. Both untagged and tagged traffic are affected. The same configuration works as it should with non-Mellanox cards.

To restore traffic flow on the VLAN-aware bridge, the Mellanox card needs to be manually put in promiscuous mode by issuing: "ip link set dev ens6f0np0 promisc on"

The second interface on the same card, attached to a non-VLAN-aware bridge, enters promiscuous mode automatically once added to the bridge, with no user interaction.
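Since the host uses ifupdown (see the /etc/network/interfaces excerpt below), the manual promisc workaround can be made persistent across reboots with a post-up hook. This is only a hedged sketch of the workaround, not a fix; the interface name ens6f0np0 is taken from the configuration in this post:

```
iface ens6f0np0 inet manual
        # Workaround only: force promiscuous mode so the VLAN-aware
        # bridge passes traffic on this ConnectX-4 Lx port, mirroring
        # the manual "ip link set ... promisc on" command above.
        post-up ip link set dev $IFACE promisc on
```

ifupdown substitutes $IFACE with the stanza's interface name, so the same line can be reused for other ports.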

Any ideas will be appreciated.

Configuration details:

OS: Debian 10 Buster + Proxmox (PVE 6.4), latest 5.12.12 Linux kernel


root@pve-bfs-1:~# mlxfwmanager

Querying Mellanox devices firmware …

Device #1:

Device Type: ConnectX4LX

Part Number: MCX4121A-ACU_Ax

Description: ConnectX-4 Lx EN network interface card; 25GbE dual-port SFP28; PCIe3.0 x8; UEFI Enabled; tall bracket

PSID: MT_0000000266

PCI Device Name: /dev/mst/mt4117_pciconf0

Base MAC:

Versions: Current Available

FW 14.30.1004 14.30.1004

PXE 3.6.0301 3.6.0301

UEFI 14.23.0017 14.23.0017

Status: Up to date


root@pve-bfs-1:~# modinfo mlx5_core

filename: /lib/modules/5.12.12-1-edge/kernel/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko

license: Dual BSD/GPL

description: Mellanox 5th generation network adapters (ConnectX series) core driver

author: Eli Cohen <eli@mellanox.com>

srcversion: E69AFAD4870C439C8F80D4C

alias: auxiliary:mlx5_core.eth

alias: pci:v000015B3d0000A2DCsv*sd*bc*sc*i*

alias: pci:v000015B3d0000A2D6sv*sd*bc*sc*i*

alias: pci:v000015B3d0000A2D3sv*sd*bc*sc*i*

alias: pci:v000015B3d0000A2D2sv*sd*bc*sc*i*

alias: pci:v000015B3d00001021sv*sd*bc*sc*i*

alias: pci:v000015B3d0000101Fsv*sd*bc*sc*i*

alias: pci:v000015B3d0000101Esv*sd*bc*sc*i*

alias: pci:v000015B3d0000101Dsv*sd*bc*sc*i*

alias: pci:v000015B3d0000101Csv*sd*bc*sc*i*

alias: pci:v000015B3d0000101Bsv*sd*bc*sc*i*

alias: pci:v000015B3d0000101Asv*sd*bc*sc*i*

alias: pci:v000015B3d00001019sv*sd*bc*sc*i*

alias: pci:v000015B3d00001018sv*sd*bc*sc*i*

alias: pci:v000015B3d00001017sv*sd*bc*sc*i*

alias: pci:v000015B3d00001016sv*sd*bc*sc*i*

alias: pci:v000015B3d00001015sv*sd*bc*sc*i*

alias: pci:v000015B3d00001014sv*sd*bc*sc*i*

alias: pci:v000015B3d00001013sv*sd*bc*sc*i*

alias: pci:v000015B3d00001012sv*sd*bc*sc*i*

alias: pci:v000015B3d00001011sv*sd*bc*sc*i*

alias: auxiliary:mlx5_core.eth-rep

alias: auxiliary:mlx5_core.sf

depends: tls,pci-hyperv-intf,mlxfw

retpoline: Y

intree: Y

name: mlx5_core

vermagic: 5.12.12-1-edge SMP mod_unload modversions

parm: debug_mask:debug mask: 1 = dump cmd data, 2 = dump cmd exec time, 3 = both. Default=0 (uint)

parm: prof_sel:profile selector. Valid range 0 - 2 (uint)


auto lo

iface lo inet loopback

iface ens6f0np0 inet manual

iface ens6f1np1 inet manual

mtu 9000

auto vmbr0

iface vmbr0 inet static



bridge-ports ens6f0np0

bridge-stp off

bridge-fd 0

bridge-vlan-aware yes

bridge-vids 2-4094

auto vmbr1

iface vmbr1 inet static


bridge-ports ens6f1np1

bridge-stp off

bridge-fd 0

mtu 9000
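The VLAN state the kernel actually programmed on the bridge port, and whether the port is in promiscuous mode, can be checked with iproute2. A hedged diagnostic sketch (read-only commands, run as root on the affected host; interface names taken from the configuration above):

```shell
# Per-port VLAN membership on the VLAN-aware bridge vmbr0
bridge vlan show dev ens6f0np0

# Detailed link state; "promiscuity 0" means no promisc reference is held,
# which would be consistent with the traffic stopping on this port
ip -d link show dev ens6f0np0 | grep promiscuity

# For comparison: the port on the non-VLAN-aware bridge vmbr1
ip -d link show dev ens6f1np1 | grep promiscuity
```

Comparing the promiscuity counters of the two ports should confirm the asymmetry described above.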

Hi Andrew,

Thank you for posting your question on our community.

Based on the information shared, you are running the inbox driver (the one that comes with the OS by default). In that case, it would be great if you could reach out to the OS vendor.

To test with the MLNX OFED driver we provide, please install a supported OS and kernel version. Currently, MLNX OFED does not support Debian 10 Buster + Proxmox (PVE 6.4).

To install the OFED driver on one of the supported operating systems, please visit → https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed

To check the list of supported operating systems, please visit → https://docs.mellanox.com/display/MLNXOFEDv541030/General+Support#GeneralSupport-SupportedOperatingSystems

Apart from the kernel versions listed in the link above, we also support the upstream vanilla kernel without any customizations → https://www.mellanox.com/products/adapter-software/ethernet/inbox-drivers

It would be great if you could validate your results with MLNX OFED.
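A quick way to confirm which driver a port is currently using (inbox vs. an out-of-tree OFED build) is to check where the loaded module comes from. A hedged sketch, assuming the interface name from the report above:

```shell
# Driver name, version and firmware as reported by the kernel for this port
ethtool -i ens6f0np0

# Path of the mlx5_core module file: a path under the distribution kernel
# tree (kernel/drivers/net/ethernet/mellanox/...) indicates the inbox
# driver, while OFED/DKMS builds typically install elsewhere under
# /lib/modules/<version>/ (assumption: standard packaging layout)
modinfo -n mlx5_core
```

In this thread the modinfo output already shows the inbox path and "intree: Y", which matches the assessment above.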