How can I get physical address of dma_addr_t in xavier

I allocate a DMA buffer with virt = dma_alloc_coherent(cdev, size, &dma_handle, GFP_KERNEL). The dma_handle is the DMA address, and it is not in the DDR memory region. How can I get the physical address of the DMA buffer? Thanks.

Hi,

phys_addr_t phy = virt_to_phys(virt);

  • Manikanta

Hi Manikanta,
I tried virt_to_phys() and got the physical address. I pass it to iommu_map(), which is bound to PCIe BAR0, but the RC cannot read correct data from BAR0. When I replace dma_alloc_coherent() with alloc_pages(), it works. What is the possible reason? Thanks~

Hi,

dma_alloc_coherent() already provides you with an IOMMU-mapped address, which is dma_handle; you don't need to map it again. Can you share sample code so we can check whether there is an issue?

  • Manikanta

Why do you need the physical address? If it is to pass to some hardware engine, then dma_handle (a.k.a. the IOVA) is what needs to be passed to the hardware engine, not the physical address.

Hi Manikanta, vidyas,
The function alloc_multi_page_bar0_mem() in pci-epf-tegra-vnet.c is used to allocate multiple pages for BAR0. It uses alloc_pages(), and the size is limited to 4MB. I want to allocate a larger buffer, for example 256MB, so I use dma_alloc_coherent(). I need a physical address for the BAR0 IOVA mapping. The code is as follows, but it does not work.

int alloc_multi_page_bar0_mem(struct device *cdev, struct bar0_amap *amap)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(cdev);
	int ret = 0;

	amap->virt = dma_alloc_coherent(cdev, amap->size, &amap->phy, GFP_KERNEL);
	if (!amap->virt) {
		dev_err(cdev, "%s: memory allocation failed...!\n", __func__);
		ret = -ENOMEM;
		goto fail_dma_alloc;
	}

	ret = iommu_map(domain, amap->iova, /*amap->phy*/ virt_to_phys(amap->virt),
			amap->size, IOMMU_READ | IOMMU_WRITE);
	if (ret < 0)
		goto fail_vunmap;

	dev_err(cdev, "%s: dma_addr=0x%llx dma_pa=0x%llx dma_va=%p\n",
		__func__, amap->phy, virt_to_phys(amap->virt), amap->virt);

	return 0;

fail_vunmap:
	dma_free_coherent(cdev, amap->size, amap->virt, amap->phy);
fail_dma_alloc:
	return ret;
}

Hi,

Tegra endpoint BAR is implemented in a different way:

  1. The driver allocates 4 MB of IOVA space with no backing memory (i.e. no PHY address) using iommu_dma_alloc_iova().
  2. Later, memory for each page is allocated and stitched into the IOVA space in alloc_multi_page_bar0_mem() and alloc_single_page_bar0_mem():
    • alloc_pages() to allocate PHY memory
    • iommu_map() to map a known PHY address to a known IOVA address.

Here dma_alloc_coherent() doesn't work because it doesn't take PHY & IOVA as arguments; it allocates the PHY and IOVA addresses by itself.

Are you increasing the BAR memory in the pci-epf-tegra-vnet.c driver itself, or writing your own driver?
If you are using the pci-epf-tegra-vnet.c driver, then increasing the BAR memory needs some work:

  • First you have to convert the BAR type to PCI_BASE_ADDRESS_MEM_PREFETCH and PCI_BASE_ADDRESS_MEM_TYPE_64.

  • Then increase the size of the BAR.

  • If any kernel crashes are observed, they need to be debugged.

  • Manikanta

I am writing my own driver, and BAR0 has been modified to 64-bit prefetchable.
1. How can I allocate a big memory region for the BAR0 IOVA?
2. dma_alloc_coherent() has its own PHY; can I get it with virt_to_phys()? If I can, why does it not work for the BAR0 IOVA?
3. Right now I can only use BAR0. Can I use BAR2? If I can, how do I configure it? (BAR4 is used for DMA registers by default.)

Thanks~~

1. How can I allocate a big memory region for the BAR0 IOVA?
This is already answered in comment #7.

2. dma_alloc_coherent() has its own PHY; can I get it with virt_to_phys()? If I can, why does it not work for the BAR0 IOVA?
This is already answered in comment #7.

3. Right now I can only use BAR0. Can I use BAR2? If I can, how do I configure it? (BAR4 is used for DMA registers by default.)
No, only BAR0 can be used. BAR2 will have the MSI-X vector table.

  • Manikanta

Hi Manikanta,
The method in comment #7 only supports 4MB. I use dma_alloc_coherent() to allocate 256MB and get the PHY for the BAR0 IOMMU mapping, but it does not work. How can I allocate 256MB of memory for BAR0? Thanks.

Hi,

Since the SMMU is enabled, you have to use an IOVA for the BAR, not a PHY address. Please go through comment #7 again and you'll see why dma_alloc_coherent() is not working in this case.

Tell me your requirement in detail; then I can probably suggest some alternatives.

  • Manikanta

Hi Manikanta,
I want to use the IOMMU to map 4KB of metainfo and a 256MB DMA buffer to the BAR0 IOVA. The IOMMU map call is as follows:
iommu_map(domain, amap->iova, page_to_phys(amap->page), PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
So iommu_map() needs a PHY address. For the 4KB metainfo I can use page_to_phys(amap->page). For the 256MB buffer I found dma_alloc_coherent(), and I can get the PHY with virt_to_phys(). Is there any other method that can help me map the 4KB metainfo and the 256MB buffer to BAR0? Thanks.

Hi,

BAR memory should be contiguous. However, the pci-epf-tegra-vnet.c driver needs to map memory to the BAR dynamically, so we chose the method described in comment #7 under “Tegra endpoint BAR is implemented in a different way”.

If you don’t need to map memory to the BAR dynamically and the BAR memory is a one-time allocation, then you can use the dma_alloc_coherent() API. Allocate the BAR memory in one shot and program the IOVA using pci_epc_set_bar(). You shouldn’t use the PHY address because the SMMU is enabled.

  • Manikanta

Hi,
I do need to map memory to the BAR dynamically: I need to map 4KB of metainfo and a 256MB DMA buffer to BAR0. The difference is that my DMA buffer is 256MB, which cannot be allocated by alloc_pages().

Allocate the 4KB + 256MB in one shot using dma_alloc_coherent().

  • Manikanta

And I need to map a region of registers to BAR0 too.

Hi,

virt = dma_alloc_coherent(…, size, &dma_handle, …);
pci_epc_set_bar(epc, BAR_0, dma_handle, size, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_32);

Make sure the host supports a 256 MB 32-bit non-prefetchable BAR mapping. If not, you have to go for a 64-bit prefetchable BAR mapping, i.e.:
pci_epc_set_bar(epc, BAR_0, dma_handle, size, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64 | PCI_BASE_ADDRESS_MEM_PREFETCH);

  • Manikanta

Hi Manikanta,
I want to map 4KB of metainfo + a region of Xavier registers + the 256MB DMA buffer to BAR0, so I need to use the IOMMU. But I do not know how to allocate 256MB for the IOMMU mapping. Thanks.

Can I use CMA? If so, how do I configure and use it on Xavier? Thanks~

Hi,

CMA is integrated into the DMA alloc APIs, like dma_alloc_coherent(), so it cannot be used directly here.
You can use alloc_pages() in a for loop to allocate the 256MB and map it with iommu_map(). Each iteration should allocate a single page (order 0) and advance the IOVA by PAGE_SIZE; vmap() is then called once on the whole page array:
for (i = 0; i < npages; i++) {
	pages[i] = alloc_pages(GFP_KERNEL, 0);	/* order 0 = one 4KB page */
	iommu_map(domain, iova + i * PAGE_SIZE, page_to_phys(pages[i]),
		  PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
}
virt = vmap(pages, npages, VM_MAP, PAGE_KERNEL);

  • Manikanta