Dropping lots of UDP packets with simple TX1 configuration

I have an application that receives multicast packets. If I direct connect the receiver to the transmitter, it receives everything fine. If I add a switch between the transmitter and receiver, it drops most packets.

I realize that multicast isn’t guaranteed, however, this is behaving strangely. The transmitter sends ~250 multicast packets of 1490 bytes each in a burst that takes about 4 microseconds, 10 times every second. I receive the first ~80 packets and drop the other ~170 each time.

I also tested my code on openSUSE and connected to the same router and it worked fine. Then I tested with Windows and it worked fine. So, I am starting to think my Ubuntu is misconfigured. Especially given the low data rate.

Finally, I should mention that I can send the UDP packets from the TX1 to another computer just fine, I get all the packets. When I switch the configuration and try to receive on the TX1, most of my packets are dropped.

Any help would be appreciated.

If it helps, the following is the code used to create the multicast socket and receive (on Windows I used boost):

// Create the socket (socket() returns -1 on failure, not 0)
int sock = socket(AF_INET, SOCK_DGRAM, 0);
if (sock < 0) throw std::runtime_error(strerror(errno));

int reuse = 1;
if (setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (const char*)&reuse, sizeof(reuse)) < 0)
{
    throw std::runtime_error(strerror(errno));
}

struct sockaddr_in endpoint;
memset(&endpoint, 0, sizeof(endpoint));
endpoint.sin_family = AF_INET;
endpoint.sin_port = htons(port);
endpoint.sin_addr.s_addr = INADDR_ANY;
if (bind(sock, (struct sockaddr*)&endpoint, sizeof(endpoint)) < 0)
{
    throw std::runtime_error(strerror(errno));
}

struct ip_mreq group;
group.imr_multiaddr.s_addr = inet_addr(ip.c_str());
group.imr_interface.s_addr = INADDR_ANY;
if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, (const char*)&group, sizeof(group)) < 0)
{
    throw std::runtime_error(strerror(errno));
}

int recvBufferSize = 8 * 1024 * 1024;
if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char*)&recvBufferSize, sizeof(recvBufferSize)) < 0)
{
    throw std::runtime_error(strerror(errno));
}

// receive data
socklen_t addrLen = sizeof(endpoint);
const ssize_t rc = recvfrom(sock, msg, size, 0, (struct sockaddr*)&endpoint, &addrLen);
if (rc < 0) throw std::runtime_error(strerror(errno));
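(Not from the thread, just a sketch that may help with burst loss: on Linux, recvmmsg() can pull a whole burst of datagrams out of the socket buffer in one system call, which lowers the chance of the buffer overflowing during a fast burst. It assumes an already-bound UDP socket descriptor, called `sock` here to match the setup code; `drainBurst` is a hypothetical helper name.)

```cpp
#include <sys/socket.h>
#include <sys/uio.h>
#include <cstring>
#include <vector>

// Receive up to `batch` datagrams of at most `pktSize` bytes each into `bufs`
// (which must hold batch * pktSize bytes). Returns the number received, or -1.
int drainBurst(int sock, char* bufs, size_t pktSize, unsigned batch)
{
    std::vector<mmsghdr> msgs(batch);
    std::vector<iovec>   iovs(batch);
    for (unsigned i = 0; i < batch; ++i)
    {
        iovs[i].iov_base = bufs + i * pktSize;   // each datagram gets its own slot
        iovs[i].iov_len  = pktSize;
        std::memset(&msgs[i], 0, sizeof(msgs[i]));
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    // MSG_DONTWAIT: return immediately with whatever is queued right now.
    return recvmmsg(sock, msgs.data(), batch, MSG_DONTWAIT, nullptr);
}
```

One recvmmsg() call replaces up to `batch` recvfrom() calls, so the per-packet syscall overhead during a burst drops accordingly.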

The switch itself may be part of the problem, but it’s hard to say. Is the switch a dedicated switch, or is it also acting as a router? Routers tend to be more likely to have a performance issue such as this (there may be additional interactions on a router which relate to MAC address, security, or other filtering and forwarding rules and loads…in particular caches can get in the way and require router reset to clear them and start over).

Specifically losing packets when JTX1 receives though could just be an indication that the receiving program did not process data fast enough. To know more about what goes on, can you describe the switch, and whether it also performs router functions or firewalling? Plus, on the Jetson and the transmit side before and after a failure, can you post the output for that network interface’s “ifconfig” command (make sure to post info on which side was receiving or sending)?

The switch is not the problem. If we reverse the data flow, with the TX1 as the sender and the computer as the receiver, all the packets are received.

We literally have the simplest configuration I can make. It’s a switch with 2 computers hooked into it: the Jetson TX1 and an openSUSE box. There is no firewall or routing on the switch; it is a Netgear switch. We have validated that 2 openSUSE boxes can send and receive UDP packets between each other without dropping anything.

ifconfig is working on both sides. Everything we have checked in the OS indicates that we are not dropping packets.

I agree the reversal working the other way around means the switch is not the problem (and apparently it isn’t a managed switch which eliminates special function configuration issues). I am still interested in seeing the actual output from ifconfig, which includes statistics on packet counts and can also verify if the packets are being dropped for network reasons versus other reasons. Regardless of the likelihood of the problem being elsewhere, it’s somewhat of an easy and mandatory part of debugging to eliminate that first.

Have you looked at bumping up kernel buffer sizes? Something like:

sysctl -w net.core.rmem_max=26214400

Yes, there are actually 4 of these types of lines to improve the buffers and we have all of them maxed out. This doesn’t seem to be a buffer issue, but something with the kernel or the device driver. Stand by…
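(The exact four lines used weren’t posted in the thread; a commonly used set for UDP receive tuning on Linux looks something like the following, with illustrative values:)

```shell
# Illustrative only -- the four sysctl lines actually used were not posted.
sysctl -w net.core.rmem_max=26214400        # max per-socket receive buffer (bytes)
sysctl -w net.core.rmem_default=26214400    # default per-socket receive buffer (bytes)
sysctl -w net.core.netdev_max_backlog=5000  # packets queued between NIC and stack
sysctl -w net.ipv4.udp_mem="262144 327680 393216"  # UDP total memory limits (pages)
```

Note that raising `rmem_max` only raises the ceiling; the application still has to ask for the larger buffer via SO_RCVBUF, as the code above does.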

@linuxdev

I have the results. Looks like no packets dropped. I ran 2 tests, one with a switch in the middle and one without. When the switch is in the middle, packets are dropped, but none are reported dropped by the TX1.

What should we do next?

=== RESULTS ===

BEFORE TEST NO SWITCH - tx1 is directly connected to transmit machine.

enx00044b5ac772 Link encap:Ethernet  HWaddr 00:04:4b:5a:c7:72  
          inet addr:10.15.16.37  Bcast:10.15.16.255  Mask:255.255.255.0
          inet6 addr: fe80::204:4bff:fe5a:c772/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:761998 errors:0 dropped:0 overruns:0 frame:0
          TX packets:206 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1133026720 (1.1 GB)  TX bytes:24733 (24.7 KB)

AFTER TEST NO-SWITCH
enx00044b5ac772 Link encap:Ethernet  HWaddr 00:04:4b:5a:c7:72  
          inet addr:10.15.16.37  Bcast:10.15.16.255  Mask:255.255.255.0
          inet6 addr: fe80::204:4bff:fe5a:c772/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1072153 errors:0 dropped:0 overruns:0 frame:0
          TX packets:206 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1594300869 (1.5 GB)  TX bytes:24733 (24.7 KB)
__________________________________________________________________________________________

BEFORE TEST WITH SWITCH - a switch is now placed in the middle, between the sender computer and the TX1.

enx00044b5ac772 Link encap:Ethernet  HWaddr 00:04:4b:5a:c7:72  
          inet addr:10.15.16.37  Bcast:10.15.16.255  Mask:255.255.255.0
          inet6 addr: fe80::204:4bff:fe5a:c772/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1072153 errors:0 dropped:0 overruns:0 frame:0
          TX packets:208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1594300869 (1.5 GB)  TX bytes:24841 (24.8 KB)

AFTER TEST WITH SWITCH - it looks as though no packets are dropped.

enx00044b5ac772 Link encap:Ethernet  HWaddr 00:04:4b:5a:c7:72  
          inet addr:10.15.16.37  Bcast:10.15.16.255  Mask:255.255.255.0
          inet6 addr: fe80::204:4bff:fe5a:c772/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1156219 errors:0 dropped:0 overruns:0 frame:0
          TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1719485803 (1.7 GB)  TX bytes:25057 (25.0 KB)

The ifconfig data shows the network stack itself is functioning correctly before and after a multicast test (nothing dropped, truncated, corrupt, etc…the sender could have network collisions with another computer if there were other computers involved, but the switch has no other computers connected and collisions would mean the network would have much bigger and more obvious issues). If the packet is dropped, then it is either in the switch or in the application using the packet. I’m not familiar enough with multicast to debug it, but there are protocols the switch itself has to work with on a lower level to support this (I’m sure ARP behavior has to be multicast aware for the switch to work correctly with multicast).

Along those lines, if you were to check the receive count on the Jetson interface (I assume the Jetson is receiving, though I suppose it could be sending), and not do anything which would change that count, you could then send an exact number of multicast packets and see if the Jetson counted the same number of packets (monitor the change in RX packet count). If the packet counts match, then the application is dropping the data; if not, then the switch is dropping the data (we’ve already verified that the network stack on the Jetson is not dropping the data, so those are the only two places left for losing the packets).
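That counting test can be sketched as a small shell snippet using the same per-interface counter that ifconfig reports. The interface defaults to `lo` here so the snippet runs anywhere; on the Jetson it would be the `enx00044b5ac772` interface from the output above:

```shell
# Measure how many packets the kernel saw on an interface over a test window.
# Override IFACE with the real interface, e.g. IFACE=enx00044b5ac772.
IFACE="${IFACE:-lo}"
BEFORE=$(cat /sys/class/net/"$IFACE"/statistics/rx_packets)
sleep 1   # run the sender with a known packet count during this window
AFTER=$(cat /sys/class/net/"$IFACE"/statistics/rx_packets)
echo "$IFACE saw $((AFTER - BEFORE)) packets"
```

If the delta matches the number of packets sent, the loss is in the application; if it is short, the loss is upstream of the kernel counter.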

Given that ARP can be handled independently in the switch depending on the direction of packet flow, it is still possible the switch is the issue even if it is not a hardware fault. Should it turn out that packets are lost in the switch, then the switch’s ARP handling could be questioned. ARP handling is dependent on both computers connected to the switch as well as the switch itself, so a drop within the switch is not necessarily the switch’s fault, although a quirk in some special case is a reasonable possibility. If this is really a router and not a switch, then issues with the switch/router have a much higher probability…is this really a switch, or perhaps a router? Should it turn out that the switch is the drop point, then it might require a network protocol analyzer to see what is going on.

If the packet is dropped, then it is either in the switch or in the application using the packet.

Regarding 1, when we run the link in reverse, there are no packets dropped.
Regarding 2, what do you mean by “the application using the packet?” The application never sees the packet, and neither does the TX1 board, going by the packet counts in ifconfig.

Given that ARP can be handled independently in the switch depending on direction of packet flow it is still possible the switch is the issue even if it is not a hardware fault.

Will look into the switch’s debug logs if able.

if this is really a router and not a switch

It’s really a switch.

The switch we are using is a cisco smartswitch CG200-8.

We send 100 MBs of data over 12 seconds via UDP. Using this configuration (same as always)…

Sender Computer --> switch --> tx1

The switch’s logs show that about 71k packets are sent from the computer to the switch, the switch reports all the packets and no losses. The switch’s logs also show that it sent all 71k packets to the TX1.

On the TX1, ifconfig shows we received about 24k packets and reports no errors, dropped, or overrun.

Do you have any other suggestions to check? Otherwise, I will be sending a small tx/rx c++ code for you to test on your systems.

The link working in reverse means physical cabling is correct…ARP packets dealing with MAC address and physical layer routing still matter. It is up to the switch to recognize which MAC addresses to send packets to utilizing ARP, reverse ARP, and its knowledge of multicast rules. Multicast is a known protocol so a switch which handles multicast has built-in logic for that specific case.

This is kind of off the topic, but an example from the more common TCP might illustrate a subtle point. TCP has built-in retransmission logic: a segment which is not acknowledged within a delay limit gets retransmitted. Sometimes people suggest that UDP with a user-implemented retransmission scheme is the same thing. In a case where the two network nodes are next to each other with no intervening hops this is roughly true. Once you have intervening hops, the switches or routers can change things, because a custom reliability scheme over UDP is not known by the devices at each hop, whereas true TCP has logic built in to each hop because TCP is a known protocol. Retransmits from a user-level timeout would therefore differ in behavior between UDP over multiple hops versus TCP over multiple hops, because the intervening devices have knowledge of TCP but not of the custom logic within UDP.

Multicast is a known protocol. Back in the earlier days of networking there were simpler HUBs around, versus switches. To distinguish: a HUB sends all traffic to all ports without any intervention, while a switch is considered superior because it only sends traffic to the port actually using the data. Multicast essentially turns the switch back into a HUB, but only when the switch recognizes and intervenes with this protocol. The way a switch limits traffic to the relevant port is through ARP and reverse ARP; should ARP not function as expected, multicast will also not function as expected. Take out the switch and use either a direct connection or a dumb HUB, and multicast will be “repaired” if it was the switch’s logic which failed.

Since ifconfig does not see the packet, yet there are no drops, errors, overruns, framing, nor collision errors, it looks like the switch is where the packets go missing (none of the other testing ever indicated any hardware failure, and both ends of the data transfer are known to have the ability to function correctly). If not the switch, then it implies the NIC hardware itself is losing the packets in such a way that the kernel can’t see it…this is unlikely since the same hardware works in other cases. Again, this may require a network analyzer to determine what is really going on, but I strongly suggest loss is at the switch.

Note that it is considered perfectly correct for a switch to drop a multicast packet without any notification. Should anything bottleneck at the switch, the switch would be in error to do anything other than dismiss that data as if it never existed. I don’t know if there are any other ports connected on the switch, but if any port of the switch even touches a working physical layer of another node with a MAC address, then all of that traffic load goes to that port regardless of whether anyone on that other port is listening or not. An 8-port switch with one device sending multicast and one device attached (notice I did not say it had to be receiving) uses a given amount of the switch’s bandwidth and computing power. Should the same switch have 7 of its 8 ports connected, with one sending multicast, the switch’s internal workload would go up to 7 times the original amount. To be absolutely certain the switch is not involved would require throwing a network analyzer on the receiving computer’s cable and verifying the packets actually went out.

Incidentally, a big reason for multicast is that establishing individual UDP routes to thousands of end points would use extraordinarily more resources to do the same thing. Multicast gets rid of much of that overhead, but can still consume more resources within the switch or router than a single UDP route.

Sometimes performance under high load is the reason why people are willing to spend a lot more money for a high end switch which otherwise looks to have the same specifications and features as a lower cost consumer switch.

Since ifconfig does not see the packet, yet there are no drops, errors, overruns, framing, nor collision errors, it looks like the switch is where the packets go missing (none of the other testing ever indicated any hardware failure, and both ends of the data transfer are known to have the ability to function correctly). If not the switch, then it implies the NIC hardware itself is losing the packets in such a way that the kernel can’t see it…this is unlikely since the same hardware works in other cases.

My thought is that it has to be the NIC. The switch tells me it’s sending all the packets. When another computer is hooked up to the switch, it sees all the packets. See the following:

sender computer --> switch (says all packets sent to all users) --> TX1 (gets 1/3 of packets, yet no drops, etc.)
                         |
                         -------> laptop (sees 100% of the data)

The reverse process also works in all the permutations above. I know there are bandwidth issues with the TX1 NIC. What exactly are these issues?

I should add that about 10 years ago, I saw this issue with the HP rx1620 and rx2620s. They would drop a few packets every 5 minutes. The problem was tracked down to the ARP cache flush (secret_interval in /proc).

If I send you receiver and sender code, can you run it on your bench to verify our issue?

I am not currently at a point where I can put much time into something like that, but possibly I could see if I find those issues between my Fedora host and a JTX1 (no guarantee on how fast I’d get to it). One thing to consider is that I have a different network switch. Since removing the switch and running direct seems to at least mitigate the problem it is questionable how much value there is in my own testing with the wrong switch.

Btw, it is generally ARP caching which does get in the way, especially on cheaper routers and switches. It’s hard to count the number of cases where people in online games have issues until they reboot a router or switch…and then the issue comes back a few days or a few weeks later…then reboot helps, and the issue repeats. The reboot clears the ARP cache.

You can actually attach files to a forum message after the message is posted, though I don’t know if a “.tar.gz2” or “.tgz” type extension is allowed or not. If you have a github account you could post there and give the github URL. On a message you’ve posted hover the mouse over the quote symbol at the upper right corner…a paper clip icon should then appear, and this is for attaching files to an existing message (sorry, can’t attach to a thread as it is authored, only after).

Thanks for the reply.

I will send you code. My goal is that it will compile simply. If it works with your switch, then we will just use your switch; all the switches here reproduce the problem. I will post on GitHub.

I was able to solve the issue.

I set up the system so that it was Sender --> Switch --> TX1.

I set up Wireshark on the TX1 and sent data from the Sender at 7 MB/s. Analyzing the Wireshark output showed that the 7 MB/s was arriving at a low duty cycle: the switch was buffering up UDP packets and sending them out in one large burst, which overwhelmed the TX1. This is why the direct connection (i.e. Sender --> TX1) worked: the sender spread the 7 MB/s out over a ~50% duty cycle, whereas the packets from the switch arrived at a much lower duty cycle. The switch exchanges latency for bandwidth.

The solution was to turn on flow control at the switch. This immediately solved the problem and raised the duty cycle to the point where the TX1 was not overwhelmed.
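(For anyone hitting the same thing: the flow control here is Ethernet pause-frame flow control, enabled in the switch’s management UI. On the Linux side, the equivalent setting can be inspected and, if the NIC driver supports it, changed with ethtool; the interface name below is the one from the earlier ifconfig output.)

```shell
# Show current pause-frame (flow control) settings for the interface
ethtool -a enx00044b5ac772

# Enable receive and transmit flow control (requires root; driver support varies)
sudo ethtool -A enx00044b5ac772 rx on tx on
```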