Jetson AGX Xavier: inbound PCIe configuration (root port)

Dear Nvidia Support Team,

I am currently working with the PCIe interface, using the Jetson AGX Xavier development kit as the root device and Xilinx’s AXI-to-PCIe bridge IP as the endpoint. My goal is to establish communication between the root and the endpoint, allowing data transfer in both directions.

At present, I have successfully enabled four PCIe BARs and two AXI-to-PCIe BARs in the IP configuration. By running “lspci -vvv” on the development board, I can confirm that the BARs are correctly initialized. Furthermore, I am able to read from and write to the endpoint memory.
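As a side note, a quick way to sanity-check that the BARs actually received address assignments is to count the “Memory at” regions in the lspci output. The snippet below runs the check against a made-up two-line sample of lspci output (the addresses and sizes are illustrative, not from this board); on real hardware you would pipe `lspci -vvv -s <bus:dev.fn>` into the same grep:

```shell
# Illustrative lspci -vvv fragment (sample values, not from this board).
# Unassigned BARs typically show "[virtual]" or "ignored" instead.
out='Region 0: Memory at 1f40000000 (32-bit, non-prefetchable) [size=1M]
Region 2: Memory at 1f40100000 (32-bit, non-prefetchable) [size=64K]'

# Count BARs that actually got a memory assignment.
echo "$out" | grep -c 'Memory at'
```

With the sample above this prints 2, matching the two assigned regions.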

However, I am encountering an issue when attempting to access root memory from the endpoint. I have configured the root memory address in the AXI-to-PCIe BAR. Unfortunately, when I attempt to read from or write to this AXI address, I consistently receive the value 0xFFFFFFFF, which is not the expected behavior.
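For what it’s worth, a read of 0xFFFFFFFF often means the inbound TLP was never completed, for example because the root port’s SMMU rejected the translation. It may be worth grepping dmesg for SMMU or AER messages. The snippet below greps a fabricated sample log just to illustrate the pattern (the fault line is a made-up example of the kernel’s arm-smmu message format); on the actual board you would run `dmesg | grep -iE 'smmu|pcie|aer'`:

```shell
# Fabricated sample dmesg lines for illustration only.
log='arm-smmu 12000000.iommu: Unhandled context fault: fsr=0x402, iova=0x80000000
tegra-pcie-dw 141a0000.pcie: link is up'

# Filter for SMMU-related messages, as you would on the real dmesg output.
echo "$log" | grep -i 'smmu'
```

If a context-fault line like this appears whenever the endpoint issues a read/write, the inbound address is not mapped through the IOMMU, which would explain the 0xFFFFFFFF.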

It’s worth noting that the same endpoint has been successfully connected to an Intel-based Kontron COM Express board, and I am able to read from and write to the memory on that board without any issues.

To help diagnose this issue, I have attached comparison logs for both the NVIDIA and Intel configurations. The attached files are:

Intel_and_Nvidia_compare_log.pdf (307.6 KB)
Intel_and_Nvidia_hex_dump_cmpre.pdf (177.7 KB)

Your assistance in resolving this matter is highly appreciated. If you could kindly review the attached files and provide guidance on how to address this issue, it would be of great help.

Thank you in advance for your support.

Sorry for the late response — is this still an issue that needs support? Thanks

Yes kayccc, I am still facing this issue.

Do you have a log or console result for what you did?

Any lspci -vvv results to share? And could you share the application you are running?

The attached .pdf files contain the lspci -vvv results for both NVIDIA and Intel; the PDF is simply a comparison of the lspci -vvv outputs from the two boards. I am running the Xilinx pcie_dma_driver on the NVIDIA Jetson.

Please review the attached PDF document. I have attached all the required information.


We need the following details:

  1. UART or dmesg logs.
  2. Which driver is being used, and how does it allocate memory on the RP side?
  3. Run the following script and provide the output.

for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done
done
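For anyone following along: the script above just walks /sys/kernel/iommu_groups and prints each group with the lspci description of its devices. The sketch below reproduces the same loop against a throwaway fake sysfs tree, so the expected output shape is visible even off-target (the group number 7 and the BDF 0005:01:00.0 are made up; only the device name is printed, since lspci cannot resolve a fake device):

```shell
# Build a minimal fake iommu_groups tree (values are illustrative only).
tmp=$(mktemp -d)
mkdir -p "$tmp/iommu_groups/7/devices/0005:01:00.0"

# Same loop structure as the requested script, pointed at the fake tree.
for g in "$tmp"/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        printf '\t%s\n' "${d##*/}"   # the real script prints $(lspci -nns ...) here
    done
done

rm -rf "$tmp"
```

On the Jetson you would expect one line per group, and the endpoint’s BDF should appear under some group; if it does not, the device is outside SMMU management, which matters for inbound transfers.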
