QSFP cannot reach ideal 25Gbps speed

root@tegra-ubuntu:/home/tj_software# ethtool mgbe0_0
Settings for mgbe0_0:
        Supported ports: [ TP    MII ]
        Supported link modes:   Not reported
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 25000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: MII
        PHYAD: 0
        Transceiver: external
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes

Sometimes I can only reach a bit over 10G.

I have now aggregated the links into one, and it shows a negotiated 100G, but the actual throughput is still 10G.
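For reference, the per-slave LACP state and negotiated speeds can be checked from the kernel's bonding status (assuming the bond is named bond0; adjust to your setup):

cat /proc/net/bonding/bond0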

Which L4T version, 38.2.1 or 38.2.2? I have had mixed results with mine too.

Have you done this after boot:

echo 1 > /sys/devices/platform/bus@0/a808a10000.ethernet/net/mgbe0_0/threaded
echo 1 > /sys/devices/platform/bus@0/a808e10000.ethernet/net/mgbe3_0/threaded
echo 1 > /sys/devices/platform/bus@0/a808b10000.ethernet/net/mgbe1_0/threaded
echo 1 > /sys/devices/platform/bus@0/a808d10000.ethernet/net/mgbe2_0/threaded
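I believe those platform paths map to the per-interface sysfs entries, so the same thing can be written as a loop over the /sys/class/net symlinks (a sketch; run as root):

for dev in mgbe0_0 mgbe1_0 mgbe2_0 mgbe3_0; do
    echo 1 > "/sys/class/net/$dev/threaded"
done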

Hi, johnathon1

No, I didn’t configure this. Does the rate return to normal after configuring it?

I found that I was getting 25GbE aggregate throughput with threaded NAPI enabled, but it’s not persistent; it needs re-applying after each reboot, or a udev rule.

I have it in a boot script, and it only works on L4T 38.2.1 with a custom kernel for me. When I tried a 38.2.2 custom kernel, I couldn’t get any data through the mgbe NICs at speed 25000. Oooof!

Make sure you run them as sudo, put them in a script, etc.
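By “script” I mean something like this systemd one-shot (a sketch only: the unit name is arbitrary, the ordering target may need tuning so the interfaces exist before it runs, and $$ is systemd’s escape for a literal $):

sudo tee /etc/systemd/system/mgbe-threaded.service <<'EOF' >/dev/null
[Unit]
Description=Enable threaded NAPI on the mgbe interfaces
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'for d in mgbe0_0 mgbe1_0 mgbe2_0 mgbe3_0; do echo 1 > /sys/class/net/$$d/threaded; done'

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload && sudo systemctl enable --now mgbe-threaded.service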

Details here; it seems to enable kernel threads rather than softirqs for NAPI polling, which makes me think the mgbe driver isn’t fully offloading processing, or maybe I’m wrong. Whatever the case, the mgbe driver feels a bit hokey:

Hi, johnathon1

Wow, awesome! So when you reached 25Gbps on r38.2.1, could LACP reach 100G? I’d like to try that too.

I haven’t experimented with bonding on the mgbe interfaces yet. I’m still trying to get the 38.2.2 custom kernel to work.

Good luck, looking forward to your good news

It feels like threaded NAPI has enabled kernel multi-threading. I tested with iperf3 -P 14 during traffic generation, but user-space multi-threading didn’t improve performance, and UDP was even lower than TCP.

Try pulling your iperf3 parallel streams back to -P 8; I got more consistent results around there after testing with -P 20 in either direction. I think the CPU bus can get swamped. I didn’t test with UDP, only TCP, but I gave up as I needed to move on to 38.2.2 since I need nvidia-jetpack etc.
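For reference, the kind of invocation I mean (the address is a placeholder for the IP on the mgbe link):

iperf3 -s                            # on the receiver
iperf3 -c 192.168.10.2 -P 8 -t 30    # on the sender: 8 parallel TCP streams for 30 s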

Alright, I’m really looking forward to JetPack 7.1. I hope these issues will improve, and it would be great if it could be released soon.


Hello, may I ask how you changed the rate to 25G? When I went through the reference documentation I couldn’t find the tegra264-bpmp-3834-0008-4071-xxxx.dts file. The version I downloaded is 38.2.1.

I decompiled the dtb back into a dts and modified that.
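A dtc round-trip along these lines is what that usually looks like (the dtb file name here is my guess, based on the dts name above):

dtc -I dtb -O dts -o tegra264-bpmp.dts tegra264-bpmp-3834-0008-4071-xxxx.dtb
# ...edit the link-speed property in tegra264-bpmp.dts...
dtc -I dts -O dtb -o tegra264-bpmp-3834-0008-4071-xxxx.dtb tegra264-bpmp.dts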


Hi, johnathon1

I have run echo 1 > /sys/devices/platform/bus@0/a808a10000.ethernet/net/mgbe0_0/threaded on both the server and the client, but the iperf3 traffic still cannot reach 100G.

iperf3.txt (18.2 KB)


I found that when the two devices are directly connected, they can link up at 100G. However, when they are plugged into the switch separately, the physical link cannot come up; checking with ethtool shows the switch ports are also down. But if I just connect the fiber cable and the two modules to the corresponding switch ports as a loopback, the link comes up normally at 100G.
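When a direct link works but the switch port stays down, it may be worth comparing module and FEC state on the Jetson side. These are generic ethtool diagnostics, and the mgbe driver may not implement them all:

sudo ethtool -m mgbe0_0           # dump the QSFP module EEPROM to confirm it's recognized
sudo ethtool --show-fec mgbe0_0   # a FEC mismatch with the switch can keep a 100G link down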


Wow, your results are worse! Are you sure it’s going out via the right NIC? To avoid the default gateway I’ll normally bind iperf/iperf3 to a specific IP with --bind <>
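For example (both addresses are placeholders for the ones on your mgbe link):

iperf3 -c 192.168.10.2 --bind 192.168.10.1 -P 8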

Also make sure your switch has jumbo frames enabled and that you’ve explicitly set the MTU to 9100 or so in netplan (don’t use NetworkManager); a sketch is below. I also prefer the old iperf for jumbo-frame flooding and for testing LAGs with IPv4.
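A minimal netplan sketch of that, assuming the networkd renderer (the file name and address are my own choices):

sudo tee /etc/netplan/60-mgbe.yaml <<'EOF' >/dev/null
network:
  version: 2
  renderer: networkd
  ethernets:
    mgbe0_0:
      mtu: 9100
      addresses: [192.168.10.1/24]
EOF
sudo netplan apply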

In any case, the mgbe driver situation in 38.2.1 and 38.2.2 feels like a mess, and I agree with your previous comment about needing JetPack 7.1.

Yeah, I get only 12.7Gbps…