The PCIe upstream speed is nearly half the downstream speed, well below the theoretical throughput it should reach.
Because of resource scheduling, it will take some time before we can verify the same conditions on a DevKit.
However, in earlier tests with the DevKit in endpoint (virtual Ethernet) mode, transfers from the AGX to the x86 host were also significantly slower than transfers from the x86 host to the AGX.
Could you please give me some advice? Is it possible to tune the link speed in the AGX PCIe device tree?
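For context, these are the device-tree knobs I was thinking of adjusting. This is a minimal sketch of the Tegra194 C5 controller node; the property names are assumptions based on the L4T and upstream bindings, so please verify them against your kernel tree:

```
/* Sketch: Tegra194 C5 controller node (x8-capable), e.g. in
 * tegra194-soc-pcie.dtsi. Property names are assumptions; check
 * your kernel's bindings before relying on them.
 */
pcie@141a0000 {
        num-lanes = <8>;          /* lanes actually routed to the slot */
        nvidia,max-speed = <4>;   /* L4T 4.9 name; upstream DWC binding uses max-link-speed */
};
```

As far as I understand, these only cap the trained link width and speed, so they would not by themselves explain an upstream/downstream asymmetry, but confirming the link trains at the expected Gen and width (e.g. with `lspci -vv`, LnkSta) would rule out a link-training issue.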
After checking with the internal team: the PCIe virtual-network (vnet) transfer is limited by the TCP/IP stack, so if you want higher performance you should use CONFIG_PCIE_TEGRA_DW_DMA_TEST instead.
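For reference, enabling it is a one-line kernel config change; the exact build flow depends on your L4T release:

```
CONFIG_PCIE_TEGRA_DW_DMA_TEST=y
```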
However, CONFIG_PCIE_TEGRA_DW_DMA_TEST is currently supported only between two Xaviers, not between a Xavier and an x86 host.
Could you verify how fast the same test runs when both sides are AGX units?
All of my tests so far have used CONFIG_PCIE_TEGRA_DW_DMA_TEST to measure the PCIe DMA speed; the only change is that I modified pci-epf-nv-test.c so that the driver can also be loaded on the x86 platform.
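To be clear about how I derive the speed numbers, here is a hypothetical sketch of the throughput math; the function name and structure are placeholders of my own, not code from pci-epf-nv-test.c or the CONFIG_PCIE_TEGRA_DW_DMA_TEST path:

```c
/*
 * Hypothetical illustration of the throughput calculation only;
 * names are placeholders, not taken from pci-epf-nv-test.c.
 */
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/types.h>

/* bytes transferred / elapsed microseconds == MB/s (decimal megabytes) */
static u64 dma_throughput_mbps(size_t bytes, ktime_t start, ktime_t end)
{
	u64 us = ktime_to_us(ktime_sub(end, start));

	return us ? div64_u64((u64)bytes, us) : 0;
}
```

The timestamps are taken with ktime_get() around the DMA submission and its completion, so the number reflects end-to-end transfer time, not just the hardware burst rate.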