AGX Endpoint PCIe DMA speed

Hi

We followed the patch from the website below to test the AGX endpoint-mode PCIe speed with DMA.

The AGX was switched to endpoint mode and inserted into an x86 PCIe x4 slot; then, following the website, we ran the "cat write" and "cat read" tests on the x86 platform.

Result:
DMA Write: 536870912 bytes / 75674797 ns = 7.0944 GBytes/sec
DMA Read : 536870912 bytes / 136540673 ns = 3.9319 GBytes/sec
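
For reference, the GBytes/sec figures follow directly from bytes divided by elapsed nanoseconds (1 byte/ns equals 1 GB/s). A small standalone C program reproducing the arithmetic, using only the measured values above:

#include <stdio.h>

/* Throughput in GB/s: bytes / nanoseconds, since 1 byte/ns == 1 GB/s. */
static double gbytes_per_sec(unsigned long long bytes, unsigned long long ns)
{
        return (double)bytes / (double)ns;
}

int main(void)
{
        /* Measured values from the DMA test above. */
        printf("DMA Write: %.4f GBytes/sec\n", gbytes_per_sec(536870912ULL,  75674797ULL));
        printf("DMA Read : %.4f GBytes/sec\n", gbytes_per_sec(536870912ULL, 136540673ULL));
        return 0;
}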

The PCIe upstream (read) speed is clearly almost half of the downstream (write) speed, and does not reach the theoretical speed it should.

Due to resource scheduling, it will take some time before we can verify the same conditions with the DevKit.

However, in the past, when the DevKit was tested in endpoint virtual-Ethernet mode, the speed of transferring data from the AGX to the x86 was also significantly slower than the speed of transferring data from the x86 to the AGX.

Could you please give me some advice? Is there any possibility of adjusting the speed in the AGX PCIe device tree?

Best Regards
Jack Land

It looks like this is the expected speed when the AGX Xavier is connected to an x86 host.
We will do further investigation and share the results.

After checking with the internal team: the PCIe vnet transfer is limited by the TCP/IP stack, so if the customer wants higher performance, they should use CONFIG_PCIE_TEGRA_DW_DMA_TEST.
However, CONFIG_PCIE_TEGRA_DW_DMA_TEST is currently only supported between two Xaviers, not between a Xavier and an x86 host.
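
As a minimal sketch (assuming the usual kernel build flow; the exact defconfig location for a given L4T release is not confirmed here), the option is turned on in the kernel configuration before rebuilding:

CONFIG_PCIE_TEGRA_DW_DMA_TEST=y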

Hi Kayccc

Can you help verify how fast the same test runs when both sides are AGX?

All my tests so far have used CONFIG_PCIE_TEGRA_DW_DMA_TEST to measure the PCIe DMA speed, except that I modified pci-epf-nv-test.c so that the driver can be installed on the x86 platform.
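
For anyone attempting a similar port, below is only a rough, untested sketch of what a host-side (x86 root-port) counterpart generally looks like: a standard PCI driver that binds to the Xavier endpoint and maps one of its BARs. The vendor/device IDs and the BAR number are placeholders, and this is not the actual modification made to pci-epf-nv-test.c.

#include <linux/module.h>
#include <linux/pci.h>

/* Placeholder IDs: 0x10de is the NVIDIA vendor ID; the endpoint's
 * device ID depends on how the EP function is configured on the AGX. */
#define NV_EP_VENDOR_ID 0x10de
#define NV_EP_DEVICE_ID 0x0001 /* hypothetical */

static void __iomem *ep_bar;

static int nv_ep_host_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	ret = pci_request_regions(pdev, "nv_ep_host_test");
	if (ret)
		goto err_disable;

	/* Map BAR0 exposed by the endpoint; a DMA test would then
	 * transfer to/from this region and time the transfers. */
	ep_bar = pci_ioremap_bar(pdev, 0);
	if (!ep_bar) {
		ret = -ENOMEM;
		goto err_release;
	}

	pci_set_master(pdev);
	dev_info(&pdev->dev, "endpoint BAR0 mapped, len %llu\n",
		 (unsigned long long)pci_resource_len(pdev, 0));
	return 0;

err_release:
	pci_release_regions(pdev);
err_disable:
	pci_disable_device(pdev);
	return ret;
}

static void nv_ep_host_remove(struct pci_dev *pdev)
{
	iounmap(ep_bar);
	pci_release_regions(pdev);
	pci_disable_device(pdev);
}

static const struct pci_device_id nv_ep_host_ids[] = {
	{ PCI_DEVICE(NV_EP_VENDOR_ID, NV_EP_DEVICE_ID) },
	{ }
};
MODULE_DEVICE_TABLE(pci, nv_ep_host_ids);

static struct pci_driver nv_ep_host_driver = {
	.name     = "nv_ep_host_test",
	.id_table = nv_ep_host_ids,
	.probe    = nv_ep_host_probe,
	.remove   = nv_ep_host_remove,
};
module_pci_driver(nv_ep_host_driver);

MODULE_LICENSE("GPL");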

Best Regards
Jack Land

After checking with the team: according to the bandwidth formula, we get:
Write: 13.52 GB/s
Read: 4.625 GB/s
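
For context, one common way to estimate such a ceiling is link width x per-lane rate x line-encoding efficiency x TLP payload efficiency. The sketch below uses assumed parameters (Gen4 x8 at 16 GT/s, 128b/130b encoding, 128-byte payload with roughly 20 bytes of TLP/DLLP overhead); these are illustrative assumptions, not necessarily the formula the team used, although they land in the same neighborhood as the quoted write figure:

#include <stdio.h>

int main(void)
{
	/* Assumed link parameters (PCIe Gen4 x8) -- illustrative only. */
	double lanes    = 8.0;
	double gtps     = 16.0;          /* GT/s per lane             */
	double encoding = 128.0 / 130.0; /* Gen3/Gen4 line encoding   */
	double payload  = 128.0;         /* bytes per TLP payload     */
	double overhead = 20.0;          /* approx. TLP/DLLP overhead */

	double raw_gbps = lanes * gtps;              /* Gbit/s  */
	double link_gbs = raw_gbps / 8.0 * encoding; /* GByte/s */
	double eff_gbs  = link_gbs * payload / (payload + overhead);

	printf("raw link      : %.2f GB/s\n", link_gbs);
	printf("with TLP cost : %.2f GB/s\n", eff_gbs);
	return 0;
}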

Hi Kayccc

It seems that the read speed is clearly below what PCIe x8 should provide. Is this the expected situation?

Hi Kayccc

Is it possible to improve the read speed in any way?

Sorry, there is currently no method to do so.