DPDK 21.11: coexistence of the PMD with kernel network interfaces

I am aiming to capture only UDP data over IPv4 using a ConnectX-5 device and DPDK 21.11.1.
Card Details:
Device Type: ConnectX5
Part Number: MCX512A-ACA_Ax_Bx
Description: ConnectX-5 EN network interface card; 10/25GbE dual-port SFP28; PCIe3.0 x8; tall bracket; ROHS R6
PSID: MT_0000000080
PCI Device Name: 0000:03:00.0
Base GUID: 043f720300b05496
Base MAC: 043f72b05496
Versions:   Current      Available
  FW        16.31.1014   N/A
  PXE       3.6.0403     N/A
  UEFI      14.24.0013   N/A

As per my understanding, mlx5_core allows the PMD to coexist with the kernel network interfaces, which remain functional (https://doc.dpdk.org/guides/nics/mlx5.html).

So, in our code, as soon as we call rte_eth_dev_start(port_num), we are able to capture the UDP and other data packets arriving on the interface.
The problem is that the DPDK PMD also captures the IGMP, ARP, and ICMP packets, so that traffic is no longer available to the corresponding kernel network interface.
This in turn means we are not able to ping this interface from the other host(s) on the network.
So, is there a way with the DPDK mlx5 PMD to skip all packet types except UDP, so that the kernel network interface takes care of the ICMP, ARP, and IGMP packets?

Hi,

The Mellanox PMD allows bifurcation of traffic between user space and the kernel, so that specific types of packets can be processed either by DPDK applications or by the Linux kernel stack. By default, packets that do not match any flow rule are processed by the kernel stack, unless explicitly specified otherwise in a flow rule.
You can read more about flow bifurcation in this link:
https://dpdk-power-docs.readthedocs.io/en/latest/howto/flow_bifurcation.html?highlight=kernel%20stack
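
For illustration, here is a minimal sketch of such a rule using the rte_flow API: it matches only IPv4/UDP and steers it to one of the application's Rx queues, so anything that does not match (ARP, ICMP, IGMP, and so on) should stay on the kernel path under the mlx5 bifurcated model. The port and queue IDs are placeholders:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: steer every IPv4/UDP packet on the given port to one
 * application Rx queue. Non-matching traffic stays with the kernel
 * interface. Port and queue IDs are placeholders. */
static struct rte_flow *
steer_udp_to_queue(uint16_t port_id, uint16_t queue_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* any Ethernet frame */
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* any IPv4 header    */
        { .type = RTE_FLOW_ITEM_TYPE_UDP },   /* any UDP datagram   */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = queue_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
                                            actions, &error);
    if (flow == NULL)
        printf("rte_flow_create() failed: %s\n",
               error.message ? error.message : "(no message)");
    return flow;
}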

Best regards,
Chen

Thanks for the pointer. Now, when I call rte_flow_isolate() before rte_eth_dev_configure(), a warning is thrown while enabling all-multicast mode. I need to capture high-bitrate multicast UDP data using the Mellanox PMD, so I have added a corresponding flow rule; however, in this configuration I am unable to capture any multicast data.
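
For reference, a simplified sketch of the call order and the multicast flow I am adding (the port and queue IDs, the group address, and the configure/queue-setup steps are placeholders for our real code):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_ip.h>

#define PORT_ID  0   /* placeholder */
#define QUEUE_ID 0   /* placeholder */

static void
setup_isolated_capture(void)
{
    struct rte_flow_error error;

    /* Isolated mode must be requested before the port is configured. */
    if (rte_flow_isolate(PORT_ID, 1, &error) != 0)
        printf("rte_flow_isolate() failed: %s\n",
               error.message ? error.message : "(no message)");

    /* ... rte_eth_dev_configure(), rte_eth_rx_queue_setup(),
     * rte_eth_dev_start() as usual ... */

    /* Match the multicast UDP stream, e.g. group 239.1.1.1
     * (placeholder address), and steer it to our queue. */
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(239, 1, 1, 1)),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = RTE_BE32(0xffffffff),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = QUEUE_ID };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_create(PORT_ID, &attr, pattern, actions, &error) == NULL)
        printf("rte_flow_create() failed: %s\n",
               error.message ? error.message : "(no message)");
}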

If I remove rte_flow_isolate() from my code and instead call rte_eth_allmulticast_enable(), I am able to capture the multicast packets. But in this second approach, as I mentioned, all IGMP membership query messages get captured by the PMD, and since we do not send any response to these queries, the switch eventually drops the multicast stream. I need to pass these IGMP messages to the Linux kernel.
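
In this second approach the relevant call is just the following sketch (port ID is a placeholder):

#include <stdio.h>
#include <rte_errno.h>
#include <rte_ethdev.h>

/* Second approach: no rte_flow_isolate(); ask the NIC to accept all
 * multicast traffic. The side effect described above is that IGMP
 * membership queries now also land on the PMD queues instead of the
 * kernel interface. Port ID is a placeholder. */
static void
enable_all_multicast(uint16_t port_id)
{
    int ret = rte_eth_allmulticast_enable(port_id);
    if (ret != 0)
        printf("rte_eth_allmulticast_enable() failed: %s\n",
               rte_strerror(-ret));
}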
A few more pointers from your side would help me a lot.
Thanks.

Dear Community Members,
I am seeking further guidance or insights on this matter. If anyone has alternative approaches or additional suggestions for passing the IGMP messages to the Linux kernel while retaining the ability to capture the multicast packets, I would greatly appreciate your expertise.

Thank you in advance for your time and assistance.