How to steer all traffic from mlx4_en into DPDK user land on Linux 3.10.88-azure?

I have a Linux instance in the Azure cloud running DPDK 18.11 over an mlx4 VF (SR-IOV). DPDK relies on rte_flow to steer selected traffic to a user-land DPDK application such as testpmd, while the left-over traffic is handled by the mlx4_en driver and enters the default Linux network stack. However, whenever DPDK tries to create a flow on the VF to steer traffic, it always fails as follows:

<mlx4_ib> __mlx4_ib_create_flow: mcg table is full. Fail to register network rule.

I wonder what is going wrong here. (Although Linux 4.2+ supports tc-flower, I am on an older kernel without tc-flower, and I have shortcut certain flow creation during DPDK testpmd startup.)

Environment: Linux 3.10.88-azure, DPDK 18.11, mlnx-ofed-kernel-4.4, rdma-core-43mlnx1

Essentially, because I failed to bring up the flow, all traffic currently passes along the slave SR-IOV interface into the netvsc master interface and on into the Linux network stack; only a very few multicast packets are received by testpmd/DPDK.

What I am really looking for is a way to redirect all (non-vsc) traffic away from the default mlx4_en path into DPDK (likely a simple match-all flow in DPDK, or making a DPDK Rx queue the default queue for the VF interface?).
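For reference, a match-all rule in the DPDK 18.11 rte_flow API would look roughly like the sketch below. The port ID, queue index, priority value, and helper name are illustrative, not from the failing setup; this is the kind of rule that triggers the mcg-table error above on this mlx4 VF.

```c
#include <rte_flow.h>

/* Sketch: a match-all ingress rule that redirects every Ethernet frame
 * on port_id to DPDK Rx queue 0. All names and values are illustrative. */
static struct rte_flow *
create_match_all_flow(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = {
		.ingress  = 1,
		.priority = 0,	/* a low priority so specific rules can still win */
	};

	/* An ETH item with no spec/mask matches any Ethernet frame. */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Validate first so the PMD can report unsupported rules cleanly. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```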

Many thanks for clues.

Liwu

Hello Liwu,

Thank you for posting your question on the Mellanox Community.

As you mentioned, you are running DPDK in an Azure VM, so please follow the recommendation provided in the following link → https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk#failsafe-pmd

It explains that the DPDK application needs to run over the failsafe PMD. If the application runs directly over the VF PMD, it does not receive all packets destined to the VM, since some packets arrive over the synthetic (netvsc) interface.
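As a rough sketch of what that looks like in practice, testpmd can be launched over the failsafe PMD via the vdev_netvsc helper instead of binding the VF PMD directly. The interface name (eth1), core list, and forwarding options below are illustrative placeholders, not values from your VM:

```shell
# Illustrative: let vdev_netvsc instantiate a failsafe PMD that bonds the
# synthetic netvsc interface (assumed here to be eth1) with its paired VF,
# so packets arriving on either path reach the DPDK application.
testpmd -l 0-1 -n 2 --vdev='net_vdev_netvsc0,iface=eth1' -- \
    --port-topology=chained --forward-mode=rxonly
```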

For the full post, see https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk

If you have any additional questions, please reach out to Microsoft for more detailed instructions on how-to run DPDK in an Azure VM.

Thanks and regards,

~Mellanox Technical Support