Query on Mellanox ML2 mechanism driver

Hi!

I read somewhere that the Mellanox ML2 driver is supposed to replace the openvswitch agent on compute nodes (if one wants to use the SR-IOV technology on Mellanox cards). However, I was required to install both openstack-neutron-openvswitch and the Mellanox packages (eswitchd and openstack-neutron-mellanox) on compute nodes.

Can someone please clarify why openvswitch is still required on compute nodes? Is it just there to receive messages from the controller and pass them on to eswitchd?

I would appreciate it if someone could point me to any documents that discuss this.

Thanks!

Sorry, I meant why we still need:

neutron/plugins/ml2/drivers/mlnx/mech_mlnx.py

No, it won’t. However, we are working on developing some QoS capabilities in the near future.

Nurit/Nishant,

Do we still need the “pci_alias” configuration with the Juno release? I guess not.

Thanks

ok cool thanks

Hi,

I read that: “OpenStack Juno adds inbox support to request VM access to virtual network via SR-IOV NIC.”

Does that mean we don’t even need the Mellanox ML2 mechanism driver and the corresponding SR-IOV agent? If so, can we just plug in an Intel card that supports SR-IOV without needing any Intel-specific driver/plugin?

Thanks.

Nurit,

Since SriovNicAgent can only enable/disable a VF, it won’t be able to do any other VF configuration, right? (e.g., MTU or RSS)

Thx

Harish

Hi,

The Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize layer 2 networking technologies. It currently works with the existing openvswitch, linuxbridge, and hyperv L2 agents, and is intended to replace and deprecate the monolithic plugins associated with those L2 agents. The ml2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than would be required to add a new monolithic core plugin. A modular agent may be developed as a follow-on effort.
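
As an illustration, the mechanism drivers that ML2 loads are simply listed in its configuration, so an SR-IOV driver can run alongside openvswitch. A minimal sketch of /etc/neutron/plugins/ml2/ml2_conf.ini (the path and values here are typical examples, not a prescription):

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,sriovnicswitch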

Hope that helps,

Thanks, Nishant. That was my feeling too, though I haven’t tested it.

If that is the case:

  1. Not sure why the Mellanox-specific ML2 mechanism driver is still available under:

neutron/plugins/ml2/drivers/mech_sriov/mech_driver.py.

  2. However, I understand that the Mellanox ML2 plugin is getting deprecated in the Kilo release.

Can someone please explain?

Thanks

Harish

Hi Rian,

Thanks for your response. I already saw that definition here:

https://wiki.openstack.org/wiki/Neutron/ML2

I am looking for a more technical explanation of why the openvswitch and Mellanox plugin agents need to co-exist on the compute node. Is the openvswitch agent providing some ‘extra’ services that the Mellanox agent cannot provide?

My understanding is that the openvswitch and Mellanox ML2 agents each correspond to a mechanism driver, which is why I am wondering why both are required.

Thanks!

“There is no relationship between the two agents or mechanism drivers”

Irena,

Thanks for confirming this. I verified it by uninstalling the openvswitch agent from the compute node, and it still works!

I am doing these experiments to make sure that I run only those packages that are absolutely necessary.

Hi Nishant,

If you wish to create VMs with only SR-IOV vNICs, there is no need to install the openvswitch agent on compute nodes. There is no relationship between the two agents or mechanism drivers. The openvswitch or linuxbridge agent (depending on the chosen mechanism driver) should be installed on the Network Node to enable L3 and DHCP services.
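
For instance, on a compute node that only serves SR-IOV vNICs, the agent-side configuration boils down to mapping the physical network to the SR-IOV-capable interface; a minimal sketch, assuming a physical network named physnet2 and a PF called eth3 (both names are examples):

[sriov_nic]
physical_device_mappings = physnet2:eth3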

Does that mean we don’t even need the Mellanox ML2 mechanism driver and the corresponding SR-IOV agent? If so, can we just plug in an Intel card that supports SR-IOV without needing any Intel-specific driver/plugin?

Yes, in Juno for a basic SRIOV setup, you don’t need a vendor-specific mechanism driver. I have tested the inbox support for SRIOV in Juno and it seems to work well.
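
For reference, a basic SR-IOV port in Juno is requested by setting the vnic_type binding on the port and booting with it; a rough sketch (the names in angle brackets are placeholders):

neutron port-create <net-name> --binding:vnic_type direct
nova boot --flavor <flavor> --image <image> --nic port-id=<port-uuid> <vm-name>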

Sorry, I meant why we still need:

neutron/plugins/ml2/drivers/mlnx/mech_mlnx.py

I guess it’s leftover code due to be removed soon (someone more informed can comment).

However, I understand that the Mellanox ML2 plugin is getting deprecated in the Kilo release.

Can someone please explain?

My understanding is that since Juno has inbox support for SR-IOV, vendor-specific plugins have become redundant, so it makes sense to deprecate them in future releases. However, if an adapter provides additional features that the inbox driver might not support, its vendor would want to contribute to ML2 in Neutron (ML2 extensions) to fully exploit the adapter. For now it looks like the inbox driver can do whatever Mellanox would like to exploit its adapters for.

OK, thanks. Yes, if someone from the development community can confirm that, it would be great. Can I request Irena to comment?

Harish

Hi Nishant and Harish,

The SRIOVNicSwitch mechanism driver enables the basic SR-IOV capabilities inbox. The SRIOVNicSwitch agent enables the capability to disable/enable a VF on a NIC that supports it (like ConnectX).
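
If I recall correctly, that enable/disable maps to a VF link-state change on the PF, conceptually something like the following (device name and VF index are only examples):

ip link set dev eth3 vf 0 state disable
ip link set dev eth3 vf 0 state enable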

The Mech_mlnx mechanism driver from Mellanox is still available to support InfiniBand SR-IOV capabilities.

BTW, in Kilo, all vendor plugins will be extracted from the Neutron core into an external (stackforge) repository.

Nurit

Great, thanks Nurit for the confirmation. I may have some follow-up questions too; hope you don’t mind.

Harish

Do you mean in a conf file?

I had to set “supported_pci_vendor_devs” in /etc/neutron/plugins/ml2/ml2_conf_sriov.ini and “pci_passthrough_whitelist” in /etc/nova/nova.conf.
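
For reference, roughly what those settings look like (the vendor/product ID shown is for a Mellanox ConnectX-3 VF; the device and physical network names are only examples):

In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:

[ml2_sriov]
supported_pci_vendor_devs = 15b3:1004

In /etc/nova/nova.conf:

[DEFAULT]
pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet2"}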