Assignment of PCIe Device Memory BAR fails

Hi,

I’m struggling to get the memory BAR of a PCIe endpoint device assigned (the BAR is indeed 2 GB in size). The Jetson AGX Xavier is the root complex in my topology, connected to the endpoint device through two PCIe switches. The switches themselves also have memory BARs that are not assigned either.

As you’ll see below, the assignment of the BARs fails with the following messages:

pci 0005:00:00.0: BAR 14: no space for [mem size 0xe0000000]
pci 0005:00:00.0: BAR 14: failed to assign [mem size 0xe0000000]
pci 0005:01:00.0: BAR 14: no space for [mem size 0xc0000000]
pci 0005:01:00.0: BAR 14: failed to assign [mem size 0xc0000000]
pci 0005:01:00.0: BAR 0: no space for [mem size 0x00040000]
pci 0005:01:00.0: BAR 0: failed to assign [mem size 0x00040000]
pci 0005:02:01.0: BAR 14: no space for [mem size 0xc0000000]
pci 0005:02:01.0: BAR 14: failed to assign [mem size 0xc0000000]
pci 0005:03:00.0: BAR 14: no space for [mem size 0x80000000]
pci 0005:03:00.0: BAR 14: failed to assign [mem size 0x80000000]
pci 0005:03:00.0: BAR 0: no space for [mem size 0x00040000]
pci 0005:03:00.0: BAR 0: failed to assign [mem size 0x00040000]
pci 0005:04:00.0: BAR 14: no space for [mem size 0x80000000]
pci 0005:04:00.0: BAR 14: failed to assign [mem size 0x80000000]
pci 0005:05:00.0: BAR 0: no space for [mem size 0x80000000]
pci 0005:05:00.0: BAR 0: failed to assign [mem size 0x80000000]
dmesg log of the enumeration & assignment process
[    3.388710] tegra-pcie-dw 141a0000.pcie: Setting init speed to max speed
[    3.389781] OF: PCI: host bridge /pcie@141a0000 ranges:
[    3.389791] OF: PCI:    IO 0x3a100000..0x3a1fffff -> 0x3a100000
[    3.389796] OF: PCI:   MEM 0x1f40000000..0x213fffffff -> 0x40000000
[    3.389799] OF: PCI:   MEM 0x1c00000000..0x1dffffffff -> 0x1c00000000
[    3.499681] tegra-pcie-dw 141a0000.pcie: link is up
[    3.499890] tegra-pcie-dw 141a0000.pcie: PCI host bridge to bus 0005:00
[    3.499895] pci_bus 0005:00: root bus resource [bus 00-ff]
[    3.499900] pci_bus 0005:00: root bus resource [io  0x300000-0x3fffff] (bus address [0x3a100000-0x3a1fffff])
[    3.499904] pci_bus 0005:00: root bus resource [mem 0x1f40000000-0x213fffffff] (bus address [0x40000000-0x23fffffff])
[    3.499907] pci_bus 0005:00: root bus resource [mem 0x1c00000000-0x1dffffffff pref]
[    3.499940] pci 0005:00:00.0: [10de:1ad0] type 01 class 0x060400
[    3.500068] pci 0005:00:00.0: PME# supported from D0 D3hot D3cold
[    3.500227] iommu: Adding device 0005:00:00.0 to group 66
[    3.500570] pci 0005:01:00.0: [10b5:8718] type 01 class 0x060400
[    3.500646] pci 0005:01:00.0: reg 0x10: [mem 0x00000000-0x0003ffff]
[    3.501325] pci 0005:01:00.0: PME# supported from D0 D3hot D3cold
[    3.501633] iommu: Adding device 0005:01:00.0 to group 67
[    3.511896] pci 0005:01:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    3.512290] pci 0005:02:01.0: [10b5:8718] type 01 class 0x060400
[    3.513043] pci 0005:02:01.0: PME# supported from D0 D3hot D3cold
[    3.513387] iommu: Adding device 0005:02:01.0 to group 68
[    3.513759] pci 0005:02:01.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    3.514128] pci 0005:03:00.0: [10b5:8714] type 01 class 0x060400
[    3.514208] pci 0005:03:00.0: reg 0x10: [mem 0x00000000-0x0003ffff]
[    3.514897] pci 0005:03:00.0: PME# supported from D0 D3hot D3cold
[    3.515320] iommu: Adding device 0005:03:00.0 to group 69
[    3.515866] pci 0005:03:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    3.516230] pci 0005:04:00.0: [10b5:8714] type 01 class 0x060400
[    3.517002] pci 0005:04:00.0: PME# supported from D0 D3hot D3cold
[    3.517325] iommu: Adding device 0005:04:00.0 to group 70
[    3.517476] pci 0005:04:01.0: [10b5:8714] type 01 class 0x060400
[    3.518235] pci 0005:04:01.0: PME# supported from D0 D3hot D3cold
[    3.518552] iommu: Adding device 0005:04:01.0 to group 71
[    3.518853] pci 0005:04:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    3.518894] pci 0005:04:01.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    3.519233] pci 0005:05:00.0: [abcd:001d] type 00 class 0x040000
[    3.519319] pci 0005:05:00.0: reg 0x10: [mem 0x00000000-0x7fffffff]
[    3.520029] iommu: Adding device 0005:05:00.0 to group 72
[    3.531956] pci_bus 0005:05: busn_res: [bus 05-ff] end is updated to 05
[    3.532239] pci_bus 0005:06: busn_res: [bus 06-ff] end is updated to 06
[    3.532259] pci_bus 0005:04: busn_res: [bus 04-ff] end is updated to 06
[    3.532277] pci_bus 0005:03: busn_res: [bus 03-ff] end is updated to 06
[    3.532295] pci_bus 0005:02: busn_res: [bus 02-ff] end is updated to 06
[    3.533177] pci 0005:00:00.0: BAR 14: no space for [mem size 0xe0000000]
[    3.533180] pci 0005:00:00.0: BAR 14: failed to assign [mem size 0xe0000000]
[    3.533186] pci 0005:01:00.0: BAR 14: no space for [mem size 0xc0000000]
[    3.533189] pci 0005:01:00.0: BAR 14: failed to assign [mem size 0xc0000000]
[    3.533203] pci 0005:01:00.0: BAR 0: no space for [mem size 0x00040000]
[    3.533206] pci 0005:01:00.0: BAR 0: failed to assign [mem size 0x00040000]
[    3.533209] pci 0005:02:01.0: BAR 14: no space for [mem size 0xc0000000]
[    3.533211] pci 0005:02:01.0: BAR 14: failed to assign [mem size 0xc0000000]
[    3.533214] pci 0005:03:00.0: BAR 14: no space for [mem size 0x80000000]
[    3.533217] pci 0005:03:00.0: BAR 14: failed to assign [mem size 0x80000000]
[    3.533219] pci 0005:03:00.0: BAR 0: no space for [mem size 0x00040000]
[    3.533222] pci 0005:03:00.0: BAR 0: failed to assign [mem size 0x00040000]
[    3.533225] pci 0005:04:00.0: BAR 14: no space for [mem size 0x80000000]
[    3.533227] pci 0005:04:00.0: BAR 14: failed to assign [mem size 0x80000000]
[    3.533230] pci 0005:05:00.0: BAR 0: no space for [mem size 0x80000000]
[    3.533233] pci 0005:05:00.0: BAR 0: failed to assign [mem size 0x80000000]
[    3.533236] pci 0005:04:00.0: PCI bridge to [bus 05]
[    3.533306] pci 0005:04:01.0: PCI bridge to [bus 06]
[    3.533376] pci 0005:03:00.0: PCI bridge to [bus 04-06]
[    3.533446] pci 0005:02:01.0: PCI bridge to [bus 03-06]
[    3.533513] pci 0005:01:00.0: PCI bridge to [bus 02-06]
[    3.533581] pci 0005:00:00.0: PCI bridge to [bus 01-ff]
[    3.533597] pci 0005:00:00.0: Max Payload Size set to  256/ 256 (was  256), Max Read Rq  512
[    3.533636] pci 0005:01:00.0: Max Payload Size set to  256/2048 (was  128), Max Read Rq  128
[    3.533675] pci 0005:02:01.0: Max Payload Size set to  256/2048 (was  128), Max Read Rq  128
[    3.533715] pci 0005:03:00.0: Max Payload Size set to  256/1024 (was  128), Max Read Rq  128
[    3.533755] pci 0005:04:00.0: Max Payload Size set to  256/1024 (was  128), Max Read Rq  128
[    3.533797] pci 0005:05:00.0: Max Payload Size set to  256/1024 (was  128), Max Read Rq  512
[    3.533837] pci 0005:04:01.0: Max Payload Size set to  256/1024 (was  128), Max Read Rq  128
[    3.534011] pcieport 0005:00:00.0: Signaling PME through PCIe PME interrupt
[    3.534014] pci 0005:01:00.0: Signaling PME through PCIe PME interrupt
[    3.534016] pci 0005:02:01.0: Signaling PME through PCIe PME interrupt
[    3.534017] pci 0005:03:00.0: Signaling PME through PCIe PME interrupt
[    3.534019] pci 0005:04:00.0: Signaling PME through PCIe PME interrupt
[    3.534022] pci 0005:05:00.0: Signaling PME through PCIe PME interrupt
[    3.534024] pci 0005:04:01.0: Signaling PME through PCIe PME interrupt
[    3.534028] pcie_pme 0005:00:00.0:pcie001: service driver pcie_pme loaded
[    3.534172] aer 0005:00:00.0:pcie002: service driver aer loaded

Is it the device tree that needs to be adjusted to provide more space?
The relevant device tree node currently looks like this:

Device Tree Node pcie@141a0000
pcie@141a0000 {
                compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
                power-domains = <0x3 0x11>;
                reg = <0x0 0x141a0000 0x0 0x20000 0x0 0x3a000000 0x0 0x40000 0x0 0x3a040000 0x0 0x40000>;
                reg-names = "appl", "config", "atu_dma";
                status = "okay";
                #address-cells = <0x3>;
                #size-cells = <0x2>;
                device_type = "pci";
                num-lanes = <0x8>;
                linux,pci-domain = <0x5>;
                clocks = <0x4 0xe1 0x4 0x144>;
                clock-names = "core_clk", "core_clk_m";
                resets = <0x5 0x82 0x5 0x81>;
                reset-names = "core_apb_rst", "core_rst";
                interrupts = <0x0 0x35 0x4 0x0 0x36 0x4>;
                interrupt-names = "intr", "msi";
                pinctrl-names = "pex_rst", "clkreq";
                pinctrl-0 = <0x1a>;
                pinctrl-1 = <0x8>;
                iommus = <0x2 0x5b>;
                dma-coherent;
                #interrupt-cells = <0x1>;
                interrupt-map-mask = <0x0 0x0 0x0 0x0>;
                interrupt-map = <0x0 0x0 0x0 0x0 0x1 0x0 0x35 0x4>;
                nvidia,dvfs-tbl = <0xc28cb00 0xc28cb00 0xc28cb00 0x18519600 0xc28cb00 0xc28cb00 0x18519600 0x27b25a80 0xc28cb00 0x18519600 0x27b25a80 0x3f89de80 0x18519600 0x27b25a80 0x3f89de80 0x7f22ff40>;
                nvidia,max-speed = <0x4>;
                nvidia,disable-aspm-states = <0xf>;
                nvidia,controller-id = <0x3 0x5>;
                nvidia,tsa-config = <0x200b004>;
                nvidia,disable-l1-cpm;
                nvidia,aux-clk-freq = <0x13>;
                nvidia,preset-init = <0x5>;
                nvidia,aspm-cmrt = <0x3c>;
                nvidia,aspm-pwr-on-t = <0x14>;
                nvidia,aspm-l0s-entrance-latency = <0x3>;
                bus-range = <0x0 0xff>;
                ranges = <0x81000000 0x0  0x3a100000 0x0  0x3a100000 0x0 0x100000 
                          0x82000000 0x0  0x40000000 0x1f 0x40000000 0x2 0x00000000 
                          0xc2000000 0x1c 0x0        0x1c 0x0        0x2 0x00000000>;
                nvidia,cfg-link-cap-l1sub = <0x1c4>;
                nvidia,cap-pl16g-status = <0x174>;
                nvidia,cap-pl16g-cap-off = <0x188>;
                nvidia,event-cntr-ctrl = <0x1d8>;
                nvidia,event-cntr-data = <0x1dc>;
                nvidia,margin-port-cap = <0x194>;
                nvidia,margin-lane-cntrl = <0x198>;
                nvidia,dl-feature-cap = <0x30c>;
                vddio-pex-ctl-supply = <0xa>;
                nvidia,enable-power-down;
                nvidia,disable-clock-request;
                nvidia,plat-gpios = <0x13 0xca 0x0 0x13 0x1 0x1>;
                phys = <0xb 0xc 0xd 0xe 0xf 0x10 0x11 0x12>;
                phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2", "pcie-p2u-3", "pcie-p2u-4", "pcie-p2u-5", "pcie-p2u-6", "pcie-p2u-7";
                linux,phandle = <0xe4>;
                phandle = <0xe4>;
        };
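
To spell out the cell layout (my annotation, assuming the standard interpretation for #address-cells = <3> and #size-cells = <2>, i.e. <flags bus-addr-hi bus-addr-lo  cpu-addr-hi cpu-addr-lo  size-hi size-lo> per entry), the three windows decode as follows:

ranges = <0x81000000 0x0  0x3a100000 0x0  0x3a100000 0x0 0x100000     /* I/O: 1 MB, bus/CPU 0x3a100000 */
          0x82000000 0x0  0x40000000 0x1f 0x40000000 0x2 0x00000000   /* non-prefetchable MEM: 8 GB, bus 0x40000000, CPU 0x1f40000000 */
          0xc2000000 0x1c 0x0        0x1c 0x0        0x2 0x00000000>; /* prefetchable MEM: 8 GB, bus/CPU 0x1c00000000 */

These are exactly the host bridge windows reported in the dmesg log above.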

Any help would be appreciated.

Thanks.

I resolved the issue: I had to change the PCI (bus) address offset of the non-prefetchable range to 0x00000000. The non-prefetchable memory windows of PCI-to-PCI bridges are 32-bit, so they can only map bus addresses below 4 GB; with the old offset of 0x40000000, only 3 GB of that space was left, which is less than the 0xe0000000 (3.5 GB) the root port’s window required. See the resulting ranges definition of the pcie node in the device tree below (note that I adjusted the sizes as well, to keep the original total of 16 GB with no gap between the host memory regions):

pcie@141a0000 {
		[...]
		ranges = <0x81000000 0x0  0x3a100000 0x0  0x3a100000 0x0 0x100000 
				  0x82000000 0x0  0x00000000 0x1f 0x0        0x1 0x0 
				  0xc2000000 0x1c 0x0        0x1c 0x0        0x3 0x0>;
		[...]
};
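
Decoded the same way (annotations mine), the new windows are:

ranges = <0x81000000 0x0  0x3a100000 0x0  0x3a100000 0x0 0x100000   /* I/O: 1 MB, unchanged */
          0x82000000 0x0  0x00000000 0x1f 0x0        0x1 0x0        /* non-prefetchable MEM: 4 GB, bus 0x00000000, CPU 0x1f00000000 */
          0xc2000000 0x1c 0x0        0x1c 0x0        0x3 0x0>;      /* prefetchable MEM: 12 GB, bus/CPU 0x1c00000000 */

The CPU-side regions are now contiguous from 0x1c00000000 to 0x1fffffffff (16 GB in total), and the non-prefetchable window now spans the full 32-bit bus address range, so the 0xe0000000 window of the root port, the switch windows, and the 2 GB endpoint BAR all fit.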

Cheers.

Glad to know the issue is resolved!