I read somewhere that the Mellanox ML2 driver is supposed to replace the openvswitch agent on compute nodes (if one wants to use SR-IOV on Mellanox cards). However, I was required to install both openstack-neutron-openvswitch and the Mellanox packages (eswitchd and openstack-neutron-mellanox) on the compute nodes.
Can someone please clarify why openvswitch is still required on compute nodes? Is it just there to receive messages from the controller and pass them on to eswitchd?
I would appreciate it if someone could point me to any documents that discuss this.
I read that “OpenStack Juno adds inbox support to request VM access to virtual network via SR-IOV NIC.”
Does that mean we don’t even need the Mellanox ML2 mechanism driver and the corresponding SR-IOV agent? If so, can we just plug in an Intel card that supports SR-IOV without needing any Intel-specific driver/plugin?
The Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize layer 2 networking technologies. It currently works with the existing openvswitch, linuxbridge, and hyperv L2 agents, and is intended to replace and deprecate the monolithic plugins associated with those L2 agents. The ml2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than would be required to add a new monolithic core plugin. A modular agent may be developed as a follow-on effort.
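To make the “simultaneously utilize” point concrete, a minimal sketch of the ML2 section on the neutron server might look like the following (the physnet name and VLAN range are placeholders, not values from this thread):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (neutron server)
    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    # Several mechanism drivers can be enabled side by side; ML2 asks each
    # one in turn to bind a port, so openvswitch and sriovnicswitch coexist.
    mechanism_drivers = openvswitch,sriovnicswitch

    [ml2_type_vlan]
    # physnet2 and the VLAN range are assumptions for this sketch
    network_vlan_ranges = physnet2:1000:1099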
I am looking for a more technical explanation of why the openvswitch and Mellanox agents need to co-exist on the compute node. Is the openvswitch agent providing some ‘extra’ services that the Mellanox agent cannot provide?
My understanding is that both the openvswitch and Mellanox ML2 agents correspond to mechanism drivers, which is why I am wondering why both are required.
If you wish to create VMs with only SR-IOV vNICs, there is no need to install the openvswitch agent on compute nodes. There is no relationship between the two agents or mechanism drivers. The openvswitch or linuxbridge agent (depending on the chosen mechanism driver) should be installed on the network node to enable the L3 and DHCP services.
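For illustration, a compute node that serves only SR-IOV vNICs would just run neutron-sriov-nic-agent with a config along these lines (eth3 as the SR-IOV PF mapped to physnet2 is an assumption of this sketch); no openvswitch agent is needed on that host:

    # /etc/neutron/plugins/ml2/ml2_conf_sriov.ini (SR-IOV-only compute node)
    [sriov_nic]
    # Map the physical network to the PF whose VFs are handed to VMs
    # (eth3 is a placeholder interface name)
    physical_device_mappings = physnet2:eth3
    # Optionally keep some VFs out of Neutron's hands (empty = use all)
    exclude_devices =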
Does that mean we don’t even need the Mellanox ML2 mechanism driver and the corresponding SR-IOV agent? If so, can we just plug in an Intel card that supports SR-IOV without needing any Intel-specific driver/plugin?
Yes, in Juno you don’t need a vendor-specific mechanism driver for a basic SR-IOV setup. I have tested the inbox support for SR-IOV in Juno and it seems to work well.
I guess it’s leftover code that is due to be removed soon (someone more informed can comment).
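For reference, the Nova side of that basic Juno SR-IOV setup looks roughly like this (the devname/physical_network values are the same placeholders as above, not required names):

    # /etc/nova/nova.conf on the compute node: expose the PF's VFs to Nova
    [DEFAULT]
    pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2" }

    # /etc/nova/nova.conf on the controller: make the scheduler VF-aware by
    # adding PciPassthroughFilter to scheduler_default_filters

A VM then gets a VF by creating a neutron port with binding:vnic_type=direct and booting the instance with that port-id.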
However, I understand that the ML2 Mellanox plugin is being deprecated in the Kilo release.
Can someone please explain?
My understanding is that, since Juno has inbox support for SR-IOV, vendor-specific plugins have become redundant, so it makes sense to deprecate them in future releases. However, if an adapter provides additional features that the inbox driver might not support, its vendor would want to contribute ML2 extensions to Neutron to fully exploit the adapter. For now it looks like the inbox driver can do whatever Mellanox would like to exploit its adapters for.
The SRIOVNicSwitch mechanism driver enables the basic SR-IOV capabilities inbox. The corresponding SR-IOV NIC agent adds the ability to enable/disable the VF on a NIC that supports it (like ConnectX).
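That division of labor also shows up in the server-side [ml2_sriov] section, sketched here (the vendor:product ID is the ConnectX-3 VF pair used in the Juno examples; treat both values as assumptions to adjust for your NIC):

    # /etc/neutron/plugins/ml2/ml2_conf_sriov.ini (neutron server)
    [ml2_sriov]
    # Only bind ports on VFs from this vendor:product list
    supported_pci_vendor_devs = 15b3:1004
    # Refuse to bind a port unless an SR-IOV NIC agent is running on the
    # host, so enable/disable of the VF link is actually enforced
    agent_required = True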
The Mech_mlnx mechanism driver from Mellanox is still available to support InfiniBand SR-IOV capabilities.
BTW, in Kilo all vendor plugins will be extracted from the Neutron core into an external (stackforge) repository.