We’re looking to set up KVM with accelerated virtio through vDPA, but this does not appear to be supported on the MCX75310AAS-NEA_Ax ConnectX-7 400G NIC.
We followed all the documented steps: adding VFs, unbinding the VF interfaces from the mlx5_core driver, switching the NIC to switchdev mode, and rebinding the VFs to mlx5_core. After this, the following command produces no output:
vdpa mgmtdev show
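For reference, the steps we followed are roughly the ones below (a sketch; the interface name enp64s0f0 and the VF PCI addresses are placeholders for our setup, and paths may differ on other systems):

```shell
# Create 4 VFs on the PF (interface name is an assumption for this host)
echo 4 > /sys/class/net/enp64s0f0/device/sriov_numvfs

# Unbind the VFs from mlx5_core before changing the eswitch mode
echo 0000:40:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:40:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind

# Switch the PF to switchdev mode
devlink dev eswitch set pci/0000:40:00.0 mode switchdev

# Rebind the VFs to mlx5_core
echo 0000:40:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:40:00.3 > /sys/bus/pci/drivers/mlx5_core/bind

# Load the vDPA modules and check for management devices
modprobe vdpa
modprobe mlx5_vdpa
vdpa mgmtdev show
```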
We recently obtained an older ConnectX-6 Dx card (MCX623106AC-CDA_Ax), and after taking the same steps as above we see the following from vdpa mgmtdev show:
pci/0000:40:00.2:
  supported_classes net
  max_supported_vqs 65
  dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
pci/0000:40:00.3:
  supported_classes net
  max_supported_vqs 65
  dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
pci/0000:40:10.1:
  supported_classes net
  max_supported_vqs 65
  dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
pci/0000:40:10.2:
  supported_classes net
  max_supported_vqs 65
  dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
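On the ConnectX-6 Dx we can then create a vDPA device from one of these management devices and hand it to a guest. A sketch of what that looks like (the device name vdpa0 and the QEMU invocation are illustrative, not from our exact setup):

```shell
# Create a vDPA net device on one of the VF management devices
vdpa dev add name vdpa0 mgmtdev pci/0000:40:00.2

# Verify it was created; a /dev/vhost-vdpa-* node should appear
vdpa dev show vdpa0

# The vhost-vdpa character device can then be given to QEMU, e.g.:
# qemu-system-x86_64 ... \
#   -netdev vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=net0 \
#   -device virtio-net-pci,netdev=net0
```

This is exactly the workflow we were hoping to use on the ConnectX-7 400G card, but without any entries in vdpa mgmtdev show there is nothing to add a device on.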
The DPDK documentation lists only the ConnectX-7 200G variant as supported as of the latest 25.03 release; why not the 400G option?
Are there plans to add this functionality to the ConnectX-7 400G? If not, what alternative to vDPA for accelerated virtio is available with this NIC?