Connectivity Issues between Two NIC Ports on the Same Machine


I am experiencing connectivity issues between two network interface cards (NICs) on the same server, and I’m unable to establish a ping between them. I’m seeking assistance to resolve this issue.


  • Operating System: Ubuntu 20.04.5 LTS
  • NIC 1: enp130s0np0 (
  • NIC 2: enp3s0f0np0 (
  • NIC 2 Type: Nvidia Bluefield (with embedded CPU configuration)
  • Both NICs are directly connected via an Ethernet cable.
$ ip route
default via dev eno1 proto static 
default via dev eno1 proto dhcp src metric 100 
dev eno1 proto kernel scope link src 
dev eno1 proto dhcp scope link src metric 100 
dev docker0 proto kernel scope link src linkdown 
dev enp130s0np0 proto kernel scope link src 
dev enp3s0f1np1 proto kernel scope link src 
dev enp3s0f0np0 proto kernel scope link src 
dev tmfifo_net0 proto kernel scope link src 
dev virbr0 proto kernel scope link src linkdown 

Issue Details:

Despite the direct connection, attempts to ping from one NIC to the other result in “Destination Host Unreachable” errors from both ends.

  • Ping from enp3s0f0np0 to enp130s0np0:
PING ( from enp3s0f0np0: 56(84) bytes of data.
From icmp_seq=1 Destination Host Unreachable
  • Ping from enp130s0np0 to enp3s0f0np0:
PING ( from enp130s0np0: 56(84) bytes of data.
From icmp_seq=1 Destination Host Unreachable

Configuration of NIC 2 (Nvidia Bluefield):

  • Mode: Embedded CPU
  • Relevant configurations that might affect connectivity are provided below:


I am looking for guidance on why these connectivity issues might be occurring and how to resolve them. I suspect there might be a configuration or hardware-related issue, especially concerning the special properties of the Nvidia Bluefield card in embedded CPU mode.

Thank you in advance for any suggestions or guidance you can provide!


Thanks for your question.
Most likely the issue is not with the Bluefield adapter itself.

Based on the network configuration you shared, several interfaces on the same server are configured with addresses in the same subnet.

This configuration can cause various network problems (ARP and routing issues): the kernel may answer ARP requests on the wrong interface, which results in no connectivity.
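One way to confirm this ARP mix-up (a diagnostic sketch, not specific to your setup) is to compare the MAC address each side learned for its peer against the MAC addresses of the local interfaces:

```shell
# Show the neighbor (ARP) entries learned on the interface you ping from
ip neigh show dev enp130s0np0

# List local interfaces with their own MAC addresses for comparison.
# If the learned MAC belongs to a different local port than the one
# actually cabled, the kernel answered the ARP request on the wrong interface.
ip -br link show
```
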

In this case you can either configure a separate routing table per interface, or simply avoid using the same subnet on multiple ports of the same server.
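A per-interface routing table can be set up with policy routing, roughly as sketched below. The addresses 192.168.100.1/192.168.100.2 and the table IDs 100/101 are illustrative assumptions; substitute the addresses actually assigned to your two ports:

```shell
# Assumed example addresses: 192.168.100.1 on enp130s0np0 and
# 192.168.100.2 on enp3s0f0np0, both in 192.168.100.0/24.

# Give each interface its own routing table (table IDs are arbitrary)
ip route add 192.168.100.0/24 dev enp130s0np0 src 192.168.100.1 table 100
ip route add 192.168.100.0/24 dev enp3s0f0np0 src 192.168.100.2 table 101

# Select the table by source address, so traffic from each address
# always leaves via its own interface
ip rule add from 192.168.100.1 table 100
ip rule add from 192.168.100.2 table 101
```

Note that these rules are not persistent and are lost on reboot unless added to your network configuration (e.g. netplan or systemd-networkd).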

Please see the explanations below:

# All of these settings are well documented in the link below:
# Chapter 2. Working with sysctl and kernel tunables, Red Hat Enterprise Linux 7 | Red Hat Customer Portal

# arp_filter: default is usually 0; value 1 is used with static source-based
# routing, in order to force ARP requests on each interface to be answered
# based on whether the kernel would route a packet from the ARP'd IP out
# that interface
sysctl -w net.ipv4.conf.all.arp_filter=1
sysctl -w net.ipv4.conf.default.arp_filter=1

# arp_ignore: value 1 or 2 can work; the difference is whether the subnet is checked as well
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.default.arp_ignore=1

# arp_announce: either 1 or 2; 1 works better, as value 2 ignores the source IP address of the packet
sysctl -w net.ipv4.conf.all.arp_announce=1
sysctl -w net.ipv4.conf.default.arp_announce=1

# rp_filter (reverse-path filtering) can be set to 0 (off) or 2 (loose), just not 1 (strict)
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.default.rp_filter=2
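Settings applied with `sysctl -w` are lost on reboot. To make them persistent, they can be written to a sysctl drop-in file (the filename below is just an example):

```shell
# Persist the ARP/rp_filter settings across reboots
cat > /etc/sysctl.d/90-arp-multihome.conf <<'EOF'
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 1
net.ipv4.conf.default.arp_announce = 1
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
EOF

# Reload all sysctl configuration files
sysctl --system
```
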

Best Regards,


Thank you very much, Anatoly! I appreciate your detailed advice and will implement these settings to see if they help resolve the issue.