Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other
Target Operating System
Linux
QNX
other
Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other
SDK Manager Version
1.6.0.8170
other 1.1.0-6343
Host Machine Version
native Ubuntu 18.04
other
We found an extremely large amount of TX traffic on XavierA (and XavierB) eth0 when checking with ifconfig.
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXX.XXX.XXX.XXX netmask 255.255.255.0 broadcast XXX.XXX.XXX.255
ether YY:YY:YY:YY:YY:YY txqueuelen 1000 (Ethernet)
RX packets 32080460 bytes 47398494975 (47.3 GB)
RX errors 0 dropped 35198 overruns 0 frame 0
TX packets 25123307 bytes 22468378556962222 (22.4 PB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The TX bytes value grows by several GB every 5 seconds, even when none of our applications are running.
The massive transmission stops when we bring down “eth0.200”.
Could you tell us the reason for this huge transmission?
Doesn’t it severely interfere with other traffic using eth0?
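For what it's worth, the growth rate can be watched directly from sysfs rather than diffing ifconfig output by eye. This is a minimal sketch assuming the standard Linux sysfs layout (`/sys/class/net/<iface>/statistics/`); `tx_delta` is a helper name made up here, and `eth0` with a 5-second window just mirrors the observation above.

```shell
#!/bin/sh
# Print how many bytes an interface reports as transmitted over a
# sampling window, by reading the kernel's sysfs TX counter twice.
tx_delta() {
    iface="$1"
    interval="${2:-5}"
    before=$(cat "/sys/class/net/$iface/statistics/tx_bytes")
    sleep "$interval"
    after=$(cat "/sys/class/net/$iface/statistics/tx_bytes")
    echo $((after - before))
}

# Example: bytes eth0 claims to have sent in 5 seconds
# tx_delta eth0 5
```

Comparing the `tx_bytes` delta with the `tx_packets` delta over the same window is also telling: the counters quoted above (22.4 PB across roughly 25 million packets) would average nearly 1 GB per packet, which is impossible at MTU 1500 and hints at a counter/accounting anomaly rather than real traffic.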
Hardware information:
DRIVE AGX System (E3550)
1GbE J14(3&4) - Dual GbE dongle (E3579) - External switch (supports IEEE 802.3x flow control)
VickNV
August 31, 2021, 8:10pm
Tabito.Suzuki:
1GbE J14(3&4)
Hi @Tabito.Suzuki ,
According to Front Panel, 3&4 isn’t a 1 GbE port. Please clarify which port you observe the issue on.
Does it also happen on DRIVE 5.2.6?
Dear @VickNV
According to Front Panel, 3&4 isn’t a 1 GbE port. Please clarify which port you observe the issue on.
3&4 “is” a 1 GbE port on our AGX.
We actually tested our AGX 3&4 port with iperf3.
|XavierA|
[ 4] local 157.79.237.124 port 25306 connected to 157.79.237.120 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 115 MBytes 961 Mbits/sec 0 684 KBytes
[ 4] 1.00-2.00 sec 112 MBytes 940 Mbits/sec 6 399 KBytes
[ 4] 2.00-3.00 sec 112 MBytes 940 Mbits/sec 2 396 KBytes
[ 4] 3.00-4.00 sec 112 MBytes 939 Mbits/sec 4 354 KBytes
[ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec 0 496 KBytes
[ 4] 5.00-6.00 sec 112 MBytes 940 Mbits/sec 6 417 KBytes
[ 4] 6.00-7.00 sec 112 MBytes 940 Mbits/sec 2 397 KBytes
[ 4] 7.00-8.00 sec 112 MBytes 940 Mbits/sec 2 443 KBytes
[ 4] 8.00-9.00 sec 112 MBytes 940 Mbits/sec 2 403 KBytes
[ 4] 9.00-10.00 sec 112 MBytes 940 Mbits/sec 0 498 KBytes
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.10 GBytes 942 Mbits/sec 24 sender
[ 4] 0.00-10.00 sec 1.09 GBytes 939 Mbits/sec receiver
|XavierB|
[ 4] local 157.79.237.123 port 29546 connected to 157.79.237.120 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 115 MBytes 961 Mbits/sec 0 714 KBytes
[ 4] 1.00-2.00 sec 112 MBytes 940 Mbits/sec 6 458 KBytes
[ 4] 2.00-3.00 sec 112 MBytes 940 Mbits/sec 2 469 KBytes
[ 4] 3.00-4.00 sec 112 MBytes 940 Mbits/sec 4 482 KBytes
[ 4] 4.00-5.00 sec 112 MBytes 939 Mbits/sec 1 472 KBytes
[ 4] 5.00-6.00 sec 112 MBytes 937 Mbits/sec 4 359 KBytes
[ 4] 6.00-7.00 sec 112 MBytes 940 Mbits/sec 0 496 KBytes
[ 4] 7.00-8.00 sec 112 MBytes 939 Mbits/sec 2 499 KBytes
[ 4] 8.00-9.00 sec 112 MBytes 940 Mbits/sec 4 499 KBytes
[ 4] 9.00-10.00 sec 112 MBytes 939 Mbits/sec 2 510 KBytes
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.10 GBytes 942 Mbits/sec 25 sender
[ 4] 0.00-10.00 sec 1.09 GBytes 939 Mbits/sec receiver
When we connect to the 1&2 port, the bandwidth measured by iperf3 is only 100 Mbit/s.
This bandwidth problem may be the same as the following issue.
Hello, how can I change the speed of the HSD Ethernet ports? Most seem to be configured as 100 MBit/s. Just one is configured with 1GBit/s.
Does it also happen on DRIVE 5.2.6?
We haven’t tried it yet.
We can’t change the OS version easily, since the change would have several effects on our project.
VickNV
September 1, 2021, 2:48pm
Please share a picture of the connection setup so we can reproduce the issue.
Dear @VickNV
Our AGX connection is as follows.
The blue LAN cable leads to our intranet.
VickNV
September 9, 2021, 1:17am
I didn’t see the issue under the same setup.
Is it observable under default settings (right after flashing)?
nvidia@tegra-ubuntu:~$ ifconfig
enP4p1s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 00:04:4b:a4:e4:d6 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp4s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 00:04:4b:a4:e4:d4 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.185 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::204:4bff:fea4:e4d1 prefixlen 64 scopeid 0x20<link>
ether 00:04:4b:a4:e4:d1 txqueuelen 1000 (Ethernet)
RX packets 1672 bytes 160709 (160.7 KB)
RX errors 0 dropped 480 overruns 0 frame 0
TX packets 378 bytes 1125281505238 (1.1 TB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0.200: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.42.0.28 netmask 255.255.255.0 broadcast 10.42.0.255
inet6 fe80::204:4bff:fea4:e4d1 prefixlen 64 scopeid 0x20<link>
ether 00:04:4b:a4:e4:d1 txqueuelen 1000 (Ethernet)
RX packets 765 bytes 84657 (84.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 262 bytes 65640 (65.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 195 bytes 89830 (89.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 195 bytes 89830 (89.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0