Is there a network adapter with a floating VEB function?


Does Mellanox have a network adapter with a floating VEB function?

Floating VEB is supported on the Intel X710 network adapter. The datasheet (xl710-10-40-controller-datasheet.pdf) describes it as follows:

Floating VEB

A floating VEB is a VEB not connected to the network, enabling only local traffic between the VSIs that are members of this VEB. This is useful if one of the VSIs acts as a gateway to another network for all the other VSIs within the floating VEB. It can be used to isolate a set of VSIs behind a firewall or to implement NFV functionalities.

A floating VEB is created by setting the Floating VEB flag and setting a Downlink SEID of zero in the Add VEB command. The AQ response returns the switch ID used for this floating VEB.
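As a concrete illustration, the command body for a floating VEB could be packed as below. This is a sketch only: the three-field little-endian layout and the flag bit value are assumptions modeled loosely on the public i40e driver's Add VEB structure, not taken from this post, and the uplink SEID here is just a placeholder.

```python
import struct

# Assumed bit for the "Floating VEB" flag (illustrative, not authoritative).
ADD_VEB_FLOATING = 0x1

def build_add_veb_cmd(uplink_seid: int, downlink_seid: int, flags: int) -> bytes:
    """Pack uplink SEID, downlink SEID, and flags as little-endian u16 fields
    (assumed layout of the Add VEB AdminQ command body)."""
    return struct.pack("<HHH", uplink_seid, downlink_seid, flags)

# Floating VEB: Floating flag set, Downlink SEID of zero; uplink is a placeholder.
cmd = build_add_veb_cmd(uplink_seid=0, downlink_seid=0, flags=ADD_VEB_FLOATING)
```

The key points from the text are visible in the call: the Floating VEB flag is set and the downlink SEID is zero.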

Traffic in a floating VEB is identified using a special S-tag inserted and removed by hardware. Hence, the Cascaded Port Virtualizer section valid bit should be cleared in floating VEB VSIs. Firmware internally sets:

• S-tag = Switch ID of the floating VEB

• Switch ID = Switch ID of the floating VEB

• S-tag extract mode = 01b

• S-tag insert enable = 1b

• Accept tag from host = 0b

In addition, the Allow Loopback flag should be set.
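For reference, the list above can be restated as a small mapping (a sketch only: the field names are paraphrased from the list, and `veb_switch_id` stands for the switch ID returned by the Add VEB response):

```python
def floating_veb_vsi_settings(veb_switch_id: int) -> dict:
    """VSI values firmware sets internally for a floating VEB member,
    per the list above (field names paraphrased)."""
    return {
        "s_tag": veb_switch_id,       # S-tag = switch ID of the floating VEB
        "switch_id": veb_switch_id,   # Switch ID = switch ID of the floating VEB
        "s_tag_extract_mode": 0b01,   # S-tag extract mode = 01b
        "s_tag_insert_enable": 1,     # S-tag insert enable = 1b
        "accept_tag_from_host": 0,    # Accept tag from host = 0b
        "allow_loopback": 1,          # Allow Loopback flag set
    }
```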

Note: A VSI in a floating VEB can still bypass the switch if allowed via the SWTCH flag in the Tx descriptor. A packet sent to the LAN via this mechanism goes out on the VEB with the floating VEB internal S-tag. This is not an expected use case, as only trusted VSIs are allowed to use the SWTCH flag.

As VSIs on a floating VEB add and remove an S-tag, the RXMAX value of the Rx queues in these VSIs should be updated to account for the additional four bytes.
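For example, the adjustment is a simple four-byte increase (a sketch; the 1522-byte starting value, i.e. a 1500-byte MTU plus L2/VLAN overhead, is an illustrative assumption, not a figure from the datasheet):

```python
S_TAG_LEN = 4  # bytes added by the floating VEB's hardware-inserted S-tag

def adjusted_rxmax(rxmax: int) -> int:
    """Grow an Rx queue's RXMAX to leave room for the inserted S-tag."""
    return rxmax + S_TAG_LEN

print(adjusted_rxmax(1522))  # -> 1526
```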

Although a floating VEB creates an isolated environment for the PF, it cannot completely disconnect the PF from the network. Attempting to call the Delete Element AQ command (0x0243) for the VSI connected directly to the port results in an error.


The basic features for operating a floating VEB over Mellanox adapters exist:

- We support SR-IOV VFs on a Mellanox adapter bound to VMs, thus enabling local traffic between the VMs

- VFs can be up in spite of the physical NIC being down (no link connection outside), and traffic from VF to VF runs even when the physical link is down

- Local switching between virtual endpoints within a physical endpoint can be implemented and remains within the host.
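As a starting point for experimenting, VFs are created on Linux through the standard `sriov_numvfs` sysfs knob. The sketch below only builds the sysfs path and performs the write; the interface name in the commented example is an illustrative assumption, and a real run requires root and an SR-IOV-capable NIC:

```python
from pathlib import Path

SYSFS_NET = "/sys/class/net"

def sriov_numvfs_path(ifname: str, sysfs_root: str = SYSFS_NET) -> Path:
    """Path of the standard Linux sysfs file that controls the VF count."""
    return Path(sysfs_root) / ifname / "device" / "sriov_numvfs"

def enable_vfs(ifname: str, num_vfs: int) -> None:
    """Create SR-IOV VFs by writing the desired count (requires root)."""
    sriov_numvfs_path(ifname).write_text(str(num_vfs))

# Example (interface name "ens1f0" is illustrative):
# enable_vfs("ens1f0", 4)
```

Once the VFs exist and are attached to VMs, VF-to-VF traffic can be exercised with the physical link down to reproduce the behavior described above.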

Try it and share your findings with us.