The default data size for the DMA test is 255 MB (BAR0_SIZE - 1 MB). When I change BAR0_SIZE to 1 GB to test a larger data size, the DMA test fails. The largest BAR0_SIZE I have run the DMA test with successfully is 512 MB. Is there any way to run DMA testing with a data size greater than 1 GB?
How can I change the PCIe max payload size?
Can the Jetson AGX Xavier support a max payload size larger than 256 bytes?
I launched a test that ran other commands to push CPU usage up to 100% and then triggered the PCIe RP DMA transfer for testing.
However, the heavy CPU load does not impact the DMA transfer (PCIe Gen 4 x2, consistently around 10-11 Gbps).
Do you have any idea why the heavy CPU load does not affect the PCIe DMA transfer?
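For reference, one way to reproduce this kind of load test (a sketch only; the original post does not say which commands were used, so stress-ng is an assumption, and the interrupt names in /proc/interrupts vary by platform):
sudo stress-ng --cpu 0 --timeout 120s &       # load every online core to ~100% (any CPU-hogging command works here)
watch -n 1 'grep -i pcie /proc/interrupts'    # the PCIe/DMA interrupt counters should keep incrementing during the transfer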
For setting the max payload size: some other users have previously used setpci to configure it.
Example:
sudo setpci -s 0005:00:00.0 74.w (Device Capabilities register for the x4 controller)
sudo setpci -s 0005:00:00.0 78.w (Device Control register for the x4 controller), writing bits 7:5 as 001b for a 256-byte MPS
But the MPS that actually takes effect still needs to be verified with the lspci command.
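Putting those steps together, a minimal sketch (assuming, as in the example above, that the PCI Express capability of 0005:00:00.0 starts at offset 0x70, so Device Capabilities is at 0x74 and Device Control at 0x78):
sudo setpci -s 0005:00:00.0 74.w              # read Device Capabilities; bits 2:0 report the MPS the device supports
sudo setpci -s 0005:00:00.0 78.w              # read the current Device Control value
sudo setpci -s 0005:00:00.0 78.w=0020:00e0    # write only bits 7:5 (mask 0x00e0) to 001b for a 256-byte MPS
sudo lspci -s 0005:00:00.0 -vvv | grep -i MaxPayload   # confirm which MPS is actually in effect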
I tried using setpci to configure the max payload size for the PCIe root port (0005:00:00.0), but this only applied to the root port, not the endpoint. Do I also need to configure the max payload size for the endpoint?
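For illustration, the same write could in principle be applied on the endpoint side as well; the endpoint address 0005:01:00.0 below is only a placeholder (take the real BDF from lspci), and CAP_EXP+8 uses setpci's capability-name addressing so the Device Control offset does not have to be known in advance:
sudo setpci -s 0005:01:00.0 CAP_EXP+8.w               # read the endpoint's Device Control register
sudo setpci -s 0005:01:00.0 CAP_EXP+8.w=0020:00e0     # set bits 7:5 to 001b (256-byte MPS), leaving other bits untouched
sudo lspci -s 0005:01:00.0 -vvv | grep -i MaxPayload  # verify on the endpoint as well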
Do you have any idea about the issue I posted above?
I have also tried the Jetson AGX Orin platform, and its PCIe RP DMA transfer performance is almost twice that of Xavier.
If this has nothing to do with CPU usage or load, what do you think causes this result?
If you are using the DMA test driver developed by NVIDIA, there are no CPU-intensive tasks involved; it just performs DMA transfers and detects DMA completion via interrupt.
If the same DMA driver is used on Xavier, I would expect the same performance. Please compare the PCIe link speed and width from lspci -vvv and check whether they are the same.
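For example, something like the following could be run on both Xavier and Orin and the outputs compared (a sketch; 0005:00:00.0 is the root-port address used earlier in this thread, and the controller numbering may differ between the two platforms):
sudo lspci -s 0005:00:00.0 -vvv | grep -E 'LnkCap:|LnkSta:'   # advertised vs. negotiated link speed and width
sudo lspci -s 0005:00:00.0 -vvv | grep -i MaxPayload          # MPS supported vs. MPS currently in effect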