Hello NVIDIA experts, I have now upgraded to version 7.1, but after testing I found that a single 25Gb link still only reaches around 11Gb, and the 4-link bond reaches about 33Gb in total. Previously, on r38.2.1, I was able to reach over 40Gb. It was mentioned that this issue would be addressed in this release, but after testing the speeds remain at these levels. Could you help explain why this is happening? For reference, I have already set nvpmodel to MAXN mode, run jetson_clocks to pin the maximum frequencies, and enabled the threaded (multi-threaded) mode for MGBE; UDP throughput is about 6Gbps. Besides the above, are there any additional configurations I need to make? Did I miss anything?
mgbe0_0
root@tegra-ubuntu:/etc/netplan# iperf3 -c 192.168.139.12 -b 25G -P 10
Connecting to host 192.168.139.12, port 5201
[ 5] local 192.168.139.13 port 43138 connected to 192.168.139.12 port 5201
[ 7] local 192.168.139.13 port 43142 connected to 192.168.139.12 port 5201
[ 9] local 192.168.139.13 port 43150 connected to 192.168.139.12 port 5201
[ 11] local 192.168.139.13 port 43156 connected to 192.168.139.12 port 5201
[ 13] local 192.168.139.13 port 43168 connected to 192.168.139.12 port 5201
[ 15] local 192.168.139.13 port 43170 connected to 192.168.139.12 port 5201
[ 17] local 192.168.139.13 port 43178 connected to 192.168.139.12 port 5201
[ 19] local 192.168.139.13 port 43184 connected to 192.168.139.12 port 5201
[ 21] local 192.168.139.13 port 43190 connected to 192.168.139.12 port 5201
[ 23] local 192.168.139.13 port 43206 connected to 192.168.139.12 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 93.9 MBytes 787 Mbits/sec 16 228 KBytes
[ 7] 0.00-1.00 sec 78.4 MBytes 657 Mbits/sec 14 236 KBytes
[ 9] 0.00-1.00 sec 162 MBytes 1.35 Gbits/sec 18 325 KBytes
[ 11] 0.00-1.00 sec 216 MBytes 1.81 Gbits/sec 11 290 KBytes
[ 13] 0.00-1.00 sec 130 MBytes 1.09 Gbits/sec 13 238 KBytes
[ 15] 0.00-1.00 sec 121 MBytes 1.02 Gbits/sec 20 312 KBytes
[ 17] 0.00-1.00 sec 165 MBytes 1.38 Gbits/sec 16 269 KBytes
[ 19] 0.00-1.00 sec 186 MBytes 1.56 Gbits/sec 17 384 KBytes
[ 21] 0.00-1.00 sec 98.2 MBytes 824 Mbits/sec 17 195 KBytes
[ 23] 0.00-1.00 sec 132 MBytes 1.11 Gbits/sec 10 260 KBytes
[SUM] 0.00-1.00 sec 1.35 GBytes 11.6 Gbits/sec 152
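For reference, a quick sketch of how the settings described above can be double-checked before rerunning the test (standard JetPack tools; the UDP stream count and per-stream rate below are just example values, not a recommendation):

# Confirm the power/clock settings are actually applied:
sudo nvpmodel -q          # should report the MAXN mode
sudo jetson_clocks --show # clocks should be pinned at maximum
# UDP comparison run; a single iperf3 stream is often CPU-bound
# at these rates, so spread the load across several streams:
iperf3 -c 192.168.139.12 -u -b 6G -P 4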
Okay, I previously received feedback that the version after 7.0 would fix this, so I waited for 7.1, but I found there hasn't been any change.
Hello, I would like to ask another question. I don't need the QSFP ports for camera-related tasks and only want to use them as general network interfaces. I noticed that some of the dmesg messages relate to camera drivers. In my case, if I disable the camera-related components bound to the MGBE interfaces, will that improve the network speed?
[ 14.576655] camrtc-coe tegra-capture-coe0: Camera Over Eth controller a808a10000.ethernet num_chans=5 IRQ=4
[ 15.103475] camrtc-coe tegra-capture-coe1: netdev event 5 dev mgbe1_0
[ 15.110623] camrtc-coe tegra-capture-coe1: Ch4->PDMA4
[ 15.114749] camrtc-coe tegra-capture-coe1: Ch5->PDMA5
[ 15.119950] camrtc-coe tegra-capture-coe1: Ch6->PDMA6
[ 15.124832] camrtc-coe tegra-capture-coe1: Ch7->PDMA7
[ 15.130064] camrtc-coe tegra-capture-coe1: Ch8->PDMA7
[ 15.135072] camrtc-coe tegra-capture-coe1: Camera Over Eth controller a808b10000.ethernet num_chans=5 IRQ=4
[ 15.144938] camrtc-coe tegra-capture-coe2: netdev event 5 dev mgbe2_0
[ 15.151428] camrtc-coe tegra-capture-coe2: Ch4->PDMA4
[ 15.156259] camrtc-coe tegra-capture-coe2: Ch5->PDMA5
[ 15.161532] camrtc-coe tegra-capture-coe2: Ch6->PDMA6
[ 15.161750] camrtc-coe tegra-capture-coe2: Ch7->PDMA7
[ 15.161795] camrtc-coe tegra-capture-coe2: Ch8->PDMA7
[ 15.161952] camrtc-coe tegra-capture-coe2: Camera Over Eth controller a808d10000.ethernet num_chans=5 IRQ=4
[ 15.186492] camrtc-coe tegra-capture-coe3: netdev event 5 dev mgbe3_0
[ 15.192987] camrtc-coe tegra-capture-coe3: Ch4->PDMA4
[ 15.197844] camrtc-coe tegra-capture-coe3: Ch5->PDMA5
[ 15.203054] camrtc-coe tegra-capture-coe3: Ch6->PDMA6
[ 15.207955] camrtc-coe tegra-capture-coe3: Ch7->PDMA7
[ 15.213184] camrtc-coe tegra-capture-coe3: Ch8->PDMA7
[ 15.218151] camrtc-coe tegra-capture-coe3: Camera Over Eth controller a808e10000.ethernet num_chans=5 IRQ=4
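In case it helps, a minimal sketch of how the camera-over-ethernet driver could be disabled for testing, assuming it is built as a loadable module (the exact module name varies by release, so check lsmod first; <module_name> below is a placeholder, not the real name):

# Find the camera-related module backing the camrtc-coe messages:
lsmod | grep -i cam
# If it is a loadable module, blacklist it:
echo "blacklist <module_name>" | sudo tee /etc/modprobe.d/blacklist-coe.conf
# Rebuild the initramfs in case the module loads early:
sudo update-initramfs -u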
After you blacklist the camera-related drivers, is there any indication that the mgbe rate has improved compared to before? What is your maximum rate now?
# First, confirm each mgbe link negotiates 25G individually.
ip -br link
ethtool <ifname> | egrep -i 'Speed|Duplex|Auto-neg'
ethtool -S <ifname>
# If using LACP
cat /proc/net/bonding/bond0
echo layer3+4 | sudo tee /sys/class/net/bond0/bonding/xmit_hash_policy
echo fast | sudo tee /sys/class/net/bond0/bonding/lacp_rate
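Note that the sysfs writes above do not persist across reboots. Since you are already working in /etc/netplan, the equivalent settings can be made persistent with something like the following sketch (interface names and addressing are assumptions taken from your iperf3 output; adjust to your setup):

# /etc/netplan/ example (apply with: sudo netplan apply)
network:
  version: 2
  ethernets:
    mgbe0_0: {}
    mgbe1_0: {}
    mgbe2_0: {}
    mgbe3_0: {}
  bonds:
    bond0:
      interfaces: [mgbe0_0, mgbe1_0, mgbe2_0, mgbe3_0]
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        transmit-hash-policy: layer3+4
      addresses: [192.168.139.13/24]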
On my end, xmit_hash_policy is already set to layer3+4, but lacp_rate is not set to fast; I checked and it is currently slow. However, lacp_rate appears to control the LACPDU negotiation interval, i.e. how quickly link-aggregation state changes are detected, so it probably does not affect iperf throughput. It feels more like a polling interval for the status of each aggregated port. I'll try changing it to fast later and compare the results. Thank you very much!
Hi wpceswpces, also set the bond MTU equal to that of its member interfaces, or pick a higher MTU if your switch supports it.
sudo ip link set dev bond0 mtu 1466
sudo ip link set dev mgbe0_0 mtu 1466
sudo ip link set dev mgbe1_0 mtu 1466
sudo ip link set dev mgbe2_0 mtu 1466
sudo ip link set dev mgbe3_0 mtu 1466
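After changing the MTU, it is worth confirming the path actually passes it end-to-end (both hosts and the switch must agree). A quick sketch: with the don't-fragment flag, the ICMP payload size is MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header), so for MTU 1466:

# Verify the new MTU passes end-to-end without fragmentation:
ping -M do -s 1438 -c 3 192.168.139.12
# Confirm the bond picked up the setting:
ip link show bond0 | grep mtu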