Following Bringing up an Ethernet Interface over PCIe and using tvnet to check bulk transfer between the Orin EP and the PC RP, I am getting very low speeds relative to the theoretical maximum for PCIe 4.0 x8, which is ~16 GB/s (~128 Gb/s). Uplink and downlink with iperf3, depending on which side initiates the transfer, are only 259-481 Mb/s.
Other posts suggest ~40 Gb/s should be expected, since virtual Ethernet transfer is less efficient than direct DMA (~128 Gb/s), but the measured speeds are roughly 100x smaller than even that. From the logs, RP.eth0.log (6.2 KB) lists LnkSta: Speed 8GT/s, Width x8, while EP.eth1.log (6.3 KB) lists LnkSta: Speed 2.5GT/s, Width x1, which seems odd; but even PCIe 4.0 x1 (~2 GB/s) should give ~16 Gb/s.
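For context, the raw link caps implied by each reported LnkSta can be derived from the per-lane transfer rate and line-encoding overhead of each PCIe generation (a quick arithmetic sketch; TLP/protocol overhead reduces these further). Notably, if the link really trained at Gen1 (2.5 GT/s) x1 as the EP log suggests, the raw cap is only ~2 Gb/s:

```python
def pcie_raw_gbps(gen: int, lanes: int) -> float:
    """Raw usable bandwidth in Gb/s for a PCIe link, before protocol overhead."""
    # Per-lane rate (GT/s) and line-encoding efficiency per generation:
    # Gen1/2 use 8b/10b encoding, Gen3/4 use 128b/130b.
    rates = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10),
             3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    gts, eff = rates[gen]
    return gts * eff * lanes

for gen, lanes, label in [(1, 1, "EP log: 2.5GT/s x1"),
                          (3, 8, "RP log: 8GT/s x8"),
                          (4, 1, "Gen4 x1"),
                          (4, 8, "Gen4 x8 (slot max)")]:
    print(f"{label}: {pcie_raw_gbps(gen, lanes):.1f} Gb/s raw")
```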
Does virtual Ethernet use DMA, and does it require patching the Orin EP kernel, the PC RP kernel, or tvnet itself (e.g. for PCIe or Ethernet buffer sizes)?
The referenced topic involves a custom carrier board and a physical NIC chip, so is that issue related to this one (Dev Kit PCIe EP virtual Ethernet) only by association, because of the same underlying PCIe instability, rather than because of how the tvnet driver uses PCIe? Do you have any noncommittal projection of when the next release might be, or are there any kernel patches that can be tried in the meantime?
Any update on a tentative official release date, or on the availability of a prerelease patch? Meanwhile, there are two observations that may or may not be related.
The uplink and downlink speeds, while around 2% of theoretical, differ by ~50% depending on which side initiates iperf3. Is there a way to check raw PCIe EP/DMA speed without going through the tvnet driver?
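As a sanity check on that percentage, the measured rates can be compared against the raw caps of the links each side reports (numbers taken from the posts above). If the link genuinely trained at Gen1 x1, the measured 259-481 Mb/s is a much larger fraction of the available bandwidth than it first appears:

```python
# Measured iperf3 rates (Mb/s) from the posts above, vs raw link caps.
measured_mbps = (259, 481)  # depending on which side initiates
caps_gbps = {
    "Gen1 x1 (EP log)":   2.5 * 8 / 10 * 1,        # 2.0 Gb/s
    "Gen3 x8 (RP log)":   8.0 * 128 / 130 * 8,     # ~63 Gb/s
    "Gen4 x8 (slot max)": 16.0 * 128 / 130 * 8,    # ~126 Gb/s
}
for name, cap in caps_gbps.items():
    lo, hi = (m / 1000 / cap * 100 for m in measured_mbps)
    print(f"{name}: {lo:.1f}%-{hi:.1f}% of raw link rate")
```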
The EP shows LnkSta: Speed 2.5GT/s (downgraded), Width x1 (ok), rather than the x8 width of the PCIe connector on the PC RP side. Is the root cause the same as in the referenced topic, or is there something else that needs to be patched in the kernel config or source code in the meantime?
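For anyone wanting to cross-check the trained link state from both sides without relying on the saved logs, the kernel exposes it in sysfs alongside the lspci output. A minimal sketch; the BDF below is a placeholder, not your actual device address:

```shell
# Placeholder BDF -- replace with the address of the relevant PCIe function
# (find it with: lspci | grep -i nvidia)
BDF=0000:01:00.0

# Negotiated link state as the kernel sees it
cat /sys/bus/pci/devices/$BDF/current_link_speed   # e.g. "2.5 GT/s PCIe"
cat /sys/bus/pci/devices/$BDF/current_link_width   # e.g. "1"

# What the device advertises it is capable of
cat /sys/bus/pci/devices/$BDF/max_link_speed
cat /sys/bus/pci/devices/$BDF/max_link_width

# Cross-check capability (LnkCap) vs trained status (LnkSta)
sudo lspci -s $BDF -vv | grep -E 'LnkCap|LnkSta'
```

If `max_link_speed`/`max_link_width` report Gen4 x8 while the `current_*` values stay at 2.5 GT/s x1 on a quiet bus, the link never retrained upward, which would point at the same link-training instability as the referenced topic rather than at tvnet.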