Xavier PCIe shared memory access issue

I’m facing a PCIe communication issue between two Xavier systems (endpoint & root port).

I applied both the endpoint and root port patches from the following link to increase the shared memory size from 64KB to 512MB.

Xavier pcie endpoint share memory size

And I followed the procedure below:

  1. Boot EP system
  2. Configure and enable the PCIe endpoint mode
  3. Boot RP system

On the endpoint side, I got the IOVA address:

[ 2118.360255] pci_epf_nv_test pci_epf_nv_test.0: BAR0 RAM IOVA: 0xe0000000

But the PCIe device is not enumerated on the root port side, and I got an EP deinit call on the endpoint side:

[ 160.650958] tegra-pcie-dw 141a0000.pcie_ep: EP init done
[ 161.217341] tegra-pcie-dw 141a0000.pcie_ep: EP deinit done

After I disabled PCIe ASPM on both the EP and RP systems, the PCIe device is enumerated and memory is allocated on the root port side:

0005:01:00.0 RAM memory: NVIDIA Corporation Device 0001
	Flags: fast devsel, IRQ 255
	Memory at 1f40000000 (32-bit, prefetchable) [size=512M]
	Memory at 1c00000000 (64-bit, prefetchable) [size=128K]
	Memory at 1f60000000 (64-bit, non-prefetchable) [size=1M]
	Capabilities: <access denied>

Can you help me understand why we need to disable pcie_aspm on both systems for the PCIe device to enumerate?

And how do I access the virtual address in user space on the EP side to read/write data from/to the shared memory?

Note:
I also posted about the Xavier PCIe 64KB shared memory issue in the following topic:

Issue in PCIe communication between two Xavier(endpoint & rootport system)

Regards,
Bala

Since the memory is allocated using dma_alloc_coherent(), you need to use the remap_pfn_range() API to export this region to user space. Any good documentation on the remap_pfn_range() API would do.
Please go through the notes below as well.


There are times that call for exposing resources from kernel space to user space, for various reasons of which performance is one. The resources here include the following:

  1. MMIO regions (Including the PCIe BARs that get mapped into host system’s MMIO regions)
  2. Memory allocations done by kernel drivers

In either case, the API to be used is remap_pfn_range(). As input arguments, this API needs the physical address (expressed as a page frame number, or PFN) and the size of the region that needs to be exposed to user space.
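
As a minimal sketch of how this typically looks, the driver can implement an mmap file operation that calls remap_pfn_range(). The variable names (shmem_phys, shmem_size, shmem_fops) and the character device plumbing around them are illustrative, not part of the pci_epf_nv_test driver:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Illustrative placeholders: the physical address and size of the region
 * to expose (e.g. a BAR address, or the address recovered by one of the
 * two methods described below). */
static phys_addr_t shmem_phys;
static size_t shmem_size;

static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long req_size = vma->vm_end - vma->vm_start;

	if (req_size > shmem_size)
		return -EINVAL;

	/* Map the physical pages straight into the calling process. */
	return remap_pfn_range(vma, vma->vm_start,
			       shmem_phys >> PAGE_SHIFT,
			       req_size, vma->vm_page_prot);
}

static const struct file_operations shmem_fops = {
	.owner = THIS_MODULE,
	.mmap  = shmem_mmap,
};

User space then open()s whatever character device the driver registers with these fops and calls mmap() on it to get a pointer to the shared memory.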

In case of (1), it is straightforward, as the physical address is known directly.

In case of (2), memory is most of the time allocated using DMA APIs like dma_alloc_coherent(), where only the CPU VA (a.k.a. kernel virtual address) is known and not the physical address. To get the physical address from the kernel virtual address, there are again two methods that can be used.

Before jumping into both methods, please note that it might be tempting to use the __pa() macro for this purpose, but it may not work. It may work on x86 systems, but it surely doesn’t work on ARM systems, because the kernel virtual address returned here is not the same as the kernel’s logical address; __pa() only works when the two coincide, which is generally the case on x86 systems.

Method-1: (Getting the physical address from VA (a.k.a CPU-VA))

The vmalloc_to_pfn() API can be used to get the PFN directly from the CPU-VA.
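
A hedged sketch of Method-1, assuming cpu_va is the kernel virtual address returned by dma_alloc_coherent() (the variable and function names are illustrative):

#include <linux/vmalloc.h>
#include <linux/mm.h>

/* Sketch: map a dma_alloc_coherent() buffer into user space using the
 * PFN derived from its kernel virtual address. This assumes the buffer
 * is physically contiguous; if it is not, the region has to be mapped
 * page by page instead of in one remap_pfn_range() call. */
static int map_by_cpu_va(struct vm_area_struct *vma, void *cpu_va)
{
	unsigned long pfn = vmalloc_to_pfn(cpu_va);

	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}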

Method-2: (Getting the physical address from IOVA)

This is a two-step process: first use iommu_get_domain_for_dev() to get the device’s IOMMU domain, and then use iommu_iova_to_phys() to get the physical address.
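
A sketch of Method-2, assuming dev is the endpoint function’s struct device and iova is the address printed by the driver (for example, the BAR0 RAM IOVA 0xe0000000 from the log above); the function name is again illustrative:

#include <linux/iommu.h>
#include <linux/mm.h>

/* Sketch: translate an IOVA back to a physical address through the
 * device's IOMMU domain, then map that physical region to user space. */
static int map_by_iova(struct device *dev, struct vm_area_struct *vma,
		       dma_addr_t iova)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	phys_addr_t phys;

	if (!domain)
		return -ENODEV;

	phys = iommu_iova_to_phys(domain, iova);
	if (!phys)
		return -EFAULT;

	return remap_pfn_range(vma, vma->vm_start, phys >> PAGE_SHIFT,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}

Once either method is wired into an mmap handler like the one sketched earlier, a user-space program can simply mmap() the device node and read/write the shared memory directly.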


Thanks for sharing the detailed explanation.
I’ll check and update you.

Regards,
Bala