OpenStack Neutron Mellanox ML2 Driver Configuration in Bright

This article describes how to use the Mellanox Neutron ML2 mechanism driver to add Mellanox InfiniBand support to an existing standard OpenStack Icehouse cloud deployment (running ML2 with the LinuxBridge mechanism driver and VLAN-based network isolation) managed with Bright Cluster Manager 7.0.

The end result extends the capabilities of the OpenStack private cloud with the ability to create OpenStack networks backed by isolated segments of the InfiniBand fabric, and to then spawn OpenStack VMs with direct access to those networks/segments via a dedicated (passthrough) virtual IB device exposed via SR-IOV. Functionality-wise, such a VM will have an IPoIB network device, direct access to the IB fabric segment (e.g. for running MPI jobs with other machines attached to that segment), and optionally also regular virtual Ethernet devices connected to VLAN-backed OpenStack networks. Users will be able to choose whether to create VMs attached to InfiniBand-backed networks (IPoIB), VLAN/VXLAN-backed isolated Ethernet networks, flat (shared) cluster-internal networks, or any combination of those.
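As a quick illustration of the end state, once the configuration described in this article is in place, an administrator should be able to create an IB-backed network and attach a VM to it along the following lines. This is only a rough sketch: the network name, subnet range, physical network label, segmentation ID, image and flavor are made-up example values, and the exact provider attributes depend on how the Mellanox driver is configured later in this article.

    # Create a tenant network mapped to an isolated segment of the IB fabric
    neutron net-create ibnet --provider:network_type vlan \
        --provider:physical_network default --provider:segmentation_id 101
    neutron subnet-create ibnet 10.10.0.0/24 --name ibsubnet

    # Request an SR-IOV passthrough port on that network
    neutron port-create ibnet --name ib-port1 --binding:vnic_type direct

    # Boot a VM with the passthrough port (take the port ID from the
    # output of the previous command)
    nova boot --flavor m1.small --image <IMAGE> --nic port-id=<PORT_ID> ib-vm1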

This article focuses on enabling IB functionality alongside pre-existing regular VLAN-based network isolation. However, most of this document can also be followed to configure IB functionality for OpenStack deployments running VXLAN-based network isolation (some tips on how to do that are included in the text).

http://kb.brightcomputing.com/faq/index.php?lang=en&action=artikel&cat=5&id=238&artlang=en