ConnectX-5 SR-IOV ASAP2 100G performance evaluation using VMs


I have been trying to get performance figures for the ASAP2 feature of the Mellanox ConnectX-5 SR-IOV 100G Ethernet card with OVS and virtual machines. I have looked for any official whitepaper or performance figures published by Mellanox, but I am unable to find any such documentation. Please share links if anyone has come across performance figures for ASAP2.

I also tried to do performance benchmarking on the ConnectX-5, based on the user manual provided by Mellanox and an article in the Mellanox community. The links follow, in that order.

Based on the first link, I am able to follow the steps correctly up to passing the PCI VF to the VM (section 1.3, step 10). In this step, passing the VF to QEMU with the "vfio-pci" interface is not working. Does anyone know how to pass a Mellanox VF to a QEMU VM? I have been using the following commands.

modprobe vfio-pci

-device vfio-pci,host=01:00.4 (on the QEMU command line)
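For reference, here is a minimal sketch of what I understand the full VF-to-vfio-pci binding sequence to be, beyond just loading the module. The PCI address 0000:01:00.4 is taken from the QEMU option above; the exact address on your system may differ, and this assumes the IOMMU is enabled on the kernel command line (e.g. intel_iommu=on):

```shell
# Load the vfio-pci driver
modprobe vfio-pci

VF=0000:01:00.4   # VF PCI address (from the QEMU option above)

# Unbind the VF from mlx5_core, if it is currently bound to it
if [ -e /sys/bus/pci/devices/$VF/driver ]; then
    echo "$VF" > /sys/bus/pci/devices/$VF/driver/unbind
fi

# Tell the kernel to prefer vfio-pci for this device, then bind it
echo vfio-pci > /sys/bus/pci/devices/$VF/driver_override
echo "$VF" > /sys/bus/pci/drivers/vfio-pci/bind

# Verify: the device should now appear under the vfio-pci driver
ls /sys/bus/pci/drivers/vfio-pci/ | grep "$VF"
```

If the bind fails, dmesg should show why (often an IOMMU problem); the VF's IOMMU group can be checked with `readlink /sys/bus/pci/devices/$VF/iommu_group`.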

The second link uses OpenStack for setting up the VMs. I have successfully completed PCI passthrough to the VM, and the VF is detected as a net device in ifconfig. When we tried to ping, with and without a static IP, the ping did not go through and no flows were inserted (per the doc, a successful ping should automatically insert flows).
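When no flows appear, it may help to confirm that the eswitch is actually in switchdev mode and that OVS hardware offload is enabled. A debugging sketch of the checks I would run (the PF PCI address 0000:01:00.0 is an assumption; substitute your own):

```shell
# Confirm the PF eswitch is in switchdev mode (should print "mode switchdev")
devlink dev eswitch show pci/0000:01:00.0

# Confirm OVS hardware offload is turned on (should print "true")
ovs-vsctl get Open_vSwitch . other_config:hw-offload

# While pinging, dump only the hardware-offloaded datapath flows
ovs-appctl dpctl/dump-flows type=offloaded

# Also dump all datapath flows, to see whether any flow is created at all
ovs-appctl dpctl/dump-flows
```

If hw-offload is not set, it can be enabled with `ovs-vsctl set Open_vSwitch . other_config:hw-offload=true` followed by an openvswitch restart; if the second dump shows flows but the first shows none, traffic is flowing through the software datapath without being offloaded.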

Note: I am able to boot the VM with the following port creation command from the doc:

neutron port-create --binding:vnic_type=direct --binding-profile '{"capabilities": ["switchdev"]}' private


but the VM does not boot when we use the following command from the doc (this command is also used in one of the Mellanox demo videos):

neutron port-create --binding:vnic_type=direct


When using vnic_type=direct alone, the following error appears in the OpenStack logs:

"ERROR nova.compute.manager [instance: 6e896301-9616-4de2-a1f8-c08b8591e8d7] PortBindingFailed: Binding failed for port 3db5a9b5-3a7b-47c9-a3d9-9167c4780fc1, please check neutron logs for more information."

but with vnic_type=direct --binding-profile '{"capabilities": ["switchdev"]}' private, VM creation succeeds.
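For anyone comparing the two cases, here they are side by side with plain ASCII quoting (the network name `private` comes from the doc). A PortBindingFailed error with plain vnic_type=direct usually means that no enabled ML2 mechanism driver on the compute host claims the port, so checking the ML2 configuration on the controller is a reasonable next step; this is a debugging sketch, not a guaranteed fix:

```shell
# Case 1: boots (switchdev capability advertised in the binding profile)
neutron port-create --binding:vnic_type=direct \
    --binding-profile '{"capabilities": ["switchdev"]}' private

# Case 2: fails with PortBindingFailed
neutron port-create --binding:vnic_type=direct private

# On the controller, check which mechanism drivers are enabled; for
# vnic_type=direct, a driver that can bind direct ports (e.g. the OVS
# driver with switchdev offload, or sriovnicswitch) must be listed
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
```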

Please share any existing performance figures for ASAP2 on 100G links. If anyone has a solution for the above problems, or alternative links for the setup, please share. Thanks in advance.

Here are two references; there should be more:


Benchmarks show that on a server with a 25G interface, OVS accelerated by ASAP2 Direct achieves 33 million packets per second (Mpps) for a single flow, with near-zero CPU cores consumed by the OVS data plane, and about 18 Mpps with 60,000 flows performing VXLAN encap/decap.


Regarding the second question, do you have logs (dmesg, QEMU, OVS, etc.) that show the failure? Does it work if you use the GUI to assign a PCI VF device to the VM?

Regarding the third question, let's concentrate on the second one first.