I attached an MCX4131A-GCAT 1x50G NIC to an x86 platform and set up SR-IOV to run DPDK testpmd tests in the guest OS.
In testing, I found that 64B packet throughput cannot match the bare-metal environment (a 30% gap).
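For context on what a 30% gap means in absolute terms: the theoretical 64B line rate at 50GbE follows from the 20 bytes of per-frame wire overhead (preamble, SFD, inter-frame gap). A quick back-of-the-envelope check (plain arithmetic, not figures from the original post):

```python
# Theoretical packet rate for 64-byte frames on a 50GbE link.
# Each frame on the wire also carries a 7B preamble + 1B SFD +
# 12B inter-frame gap, i.e. 20 extra bytes per frame.
LINK_BPS = 50e9          # 50 Gbit/s
FRAME = 64               # frame size in bytes
OVERHEAD = 20            # preamble + SFD + inter-frame gap, bytes

pps = LINK_BPS / ((FRAME + OVERHEAD) * 8)
print(f"64B line rate:  {pps / 1e6:.2f} Mpps")        # ~74.40 Mpps
print(f"with a 30% gap: {0.7 * pps / 1e6:.2f} Mpps")  # ~52.08 Mpps
```

So a 30% shortfall at 64B corresponds to roughly 22 Mpps of lost small-packet throughput, which is why this packet size is the most sensitive benchmark.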
I see similar results on an ARM platform.
Is it because the NIC does not support 64B processing in SR-IOV mode?
Are there any special settings I should try?
I have already set intel_iommu=on and iommu=pt on the host OS.
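For reference, on a RHEL 7 host those parameters are usually set through GRUB (paths per RHEL defaults; verify via /proc/cmdline after reboot):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"

# regenerate the config and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
# verify after reboot:
#   grep -o 'intel_iommu=on\|iommu=pt' /proc/cmdline
```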
Host Kernel: Linux 3.10.0-957.el7.x86_64
virsh # version
Compiled against library: libvirt 4.5.0
Using library: libvirt 4.5.0
Using API: QEMU 4.5.0
Running hypervisor: QEMU 1.5.3
mlxconfig -d 65:00.0 set NUM_OF_VFS=1 SRIOV_EN=1 CQE_COMPRESSION=1
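Note that mlxconfig changes only take effect after a firmware reset (or cold reboot), and the VF still has to be instantiated through sysfs. A sketch of the remaining steps, with <pf_netdev> standing in for the PF's netdev name:

```
mlxfwreset -d 65:00.0 reset                  # or power-cycle the host
echo 1 > /sys/class/net/<pf_netdev>/device/sriov_numvfs
lspci -d 15b3: | grep -i virtual             # confirm the VF shows up
```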
lspci at Host OS:
65:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
65:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
Guest OS xml cfg:
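The guest XML itself was not included above; for reference, a typical VFIO passthrough stanza for the VF at host address 65:00.1 would look something like the following (a sketch, not the poster's actual config):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```

For DPDK in the guest, hugepage backing (`<memoryBacking><hugepages/></memoryBacking>`) and vCPU pinning (`<cputune>`) in the same domain XML also matter for small-packet rates.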
Please refer to the following performance report by Mellanox for DPDK 19.08
For DPDK 19.05
There are a few examples for ConnectX-4/5.
Thanks for your quick response.
I did not find MCX4131A-GCAT 1x50G SR-IOV over KVM in any of the published Mellanox NIC performance reports; I only saw bare-metal test results with 2 NICs in 2x40G mode.
I would like to confirm with Mellanox: does this specific NIC not handle 64B packets well in 1x50G SR-IOV mode?
Please review the release notes below for all features supported by mlx5.
The user manual also states that SR-IOV is supported.
Regarding the NIC, please review the documentation.
Thanks for providing that input.
Yes, this NIC does support the SR-IOV feature.
My question is that I have already tried all the performance optimizations the user manual lists:
32.10. Performance tuning
But SR-IOV throughput at 64B is still 30% lower than the bare-metal figure.
Are there any more settings I could try?
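One area beyond the user-manual tuning section is the mlx5 PMD's device arguments in testpmd itself. A hypothetical invocation inside the guest (<vf_pci> is the VF's PCI address as seen by the guest; the core/queue layout is a placeholder, and devarg names should be checked against the mlx5 guide for your exact DPDK version):

```shell
testpmd -l 1-5 -n 4 -w <vf_pci>,rxq_cqe_comp_en=1 \
    -- -i --nb-cores=4 --rxq=4 --txq=4 --rxd=512 --txd=512
```

CQE compression (already enabled in firmware via CQE_COMPRESSION=1 above) mainly helps small-packet receive over the PCIe bus; beyond that, pinning the guest vCPUs to host cores on the NIC's NUMA node, and keeping the host from scheduling anything else on those cores, is usually the bigger lever for the 64B case.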
I see that you have a valid support contract with Mellanox.
Please open a support case for deep debug at firstname.lastname@example.org.