Connecting AGX Orin (PCIe EP mode) to non-NVIDIA AP via PCIe

I’m using L4T R36.4.4

I am trying to connect a Jetson AGX Orin configured in PCIe EP mode to a non-NVIDIA AP acting as a PCIe Root Port. However, the Jetson AGX Orin is not detected by the non-NVIDIA AP.
Are there any known limitations, required settings, or common pitfalls when using the AGX Orin as a PCIe Endpoint with a non-NVIDIA host AP?
For reference, the non-NVIDIA AP uses a PCIe DEMUX to select either an NPU or the Jetson AGX Orin. I have confirmed that the NPU is successfully detected.

As a first step, I am attaching the related schematics and the files prepared for configuring the Jetson AGX Orin in PCIe Endpoint mode. I would appreciate it if you could review them.

dt_files.zip (12.2 KB)

*** Please note that this reply is generated by LLM automatically ***

Based on the provided context, I will provide a comprehensive answer to the question.

The Jetson AGX Orin module supports PCIe Endpoint mode, which allows it to act as a peripheral device connected to a host system via a PCIe interface. However, there are some known limitations and required settings that need to be considered when using the AGX Orin as a PCIe Endpoint with a non-NVIDIA host AP.

Firstly, the AGX Orin module requires a specific configuration to operate in PCIe Endpoint mode. The module’s UPHY blocks must be programmed to assign the correct lanes for PCIe operation. The supported UPHY lane-mapping options are documented in the Jetson AGX Orin Series Product Design Guide; the mapping that matches your board design must be selected and flashed to the module.

Secondly, the AGX Orin module’s PCIe controller must be configured to operate in Endpoint mode. The controller is dual-mode, meaning it can be configured as either a Root Port or an Endpoint, so the Endpoint-mode settings must be applied explicitly.

Thirdly, the non-NVIDIA host AP’s PCIe DEMUX must be programmed to route its lanes to the AGX Orin module rather than to the other peripheral.

In terms of common pitfalls, the two most frequent issues are the AGX Orin’s PCIe controller not actually being configured for Endpoint mode, and the host AP’s PCIe DEMUX not selecting the AGX Orin as the downstream device.

To troubleshoot the issue, I would recommend checking the following:

  1. Verify that the AGX Orin module’s UPHY blocks are properly programmed to assign the correct lanes for PCIe operation.
  2. Verify that the AGX Orin module’s PCIe controller is properly configured to operate in Endpoint mode.
  3. Verify that the non-NVIDIA host AP’s PCIe DEMUX is properly configured to select the AGX Orin module as the peripheral device.
  4. Check the schematics and configuration files to ensure that they are correct and match the requirements of the AGX Orin module and the non-NVIDIA host AP.

Other Jetson modules have similar PCIe Endpoint mode requirements, although the exact settings vary by module and host system. For example, the Jetson Xavier NX series exposes two dual-mode PCIe controllers, C4 and C5, each of which can be configured as either a Root Port or an Endpoint, with the appropriate Endpoint-mode settings applied to the controller in use.

In summary, the AGX Orin module’s PCIe Endpoint mode configuration requires specific settings and configurations to be applied to the module’s UPHY blocks and PCIe controller. The non-NVIDIA host AP’s PCIe DEMUX also needs to be properly configured to select the AGX Orin module as the peripheral device. Common pitfalls include incorrect configuration of the AGX Orin module’s PCIe controller and the non-NVIDIA host AP’s PCIe DEMUX.


Although the configuration was applied as described above, the NPU is detected by lspci on the non-NVIDIA host AP’s PCIe, but the AGX Orin is not detected.

1. Verify that the AGX Orin module’s UPHY blocks are properly programmed to assign the correct lanes for PCIe operation.
=> Referring to Jetson AGX Orin Platform Adaptation and Bring-Up — NVIDIA Jetson Linux Developer Guide, the following configurations were applied for PCIe:

*jetson-agx-ep.conf
..
ODMDATA="gbe-uphy-config-1,hsstp-lane-map-3,hsio-uphy-config-0,nvhs-uphy-config-0";

* tegra234-mb1-bct-gpio-p3701-0000-a04.dtsi

- removed:
TEGRA234_MAIN_GPIO(AG, 0)
TEGRA234_MAIN_GPIO(AG, 1)

* tegra234-mb1-bct-pinmux-p3701-0000-a04.dtsi
pex_l7_clkreq_n_pag0 {
    nvidia,pins = "pex_l7_clkreq_n_pag0";
    nvidia,function = "pe7";
    nvidia,pull = <TEGRA_PIN_PULL_NONE>;
    nvidia,tristate = <TEGRA_PIN_DISABLE>;
    nvidia,enable-input = <TEGRA_PIN_ENABLE>;
    nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
    nvidia,lpdr = <TEGRA_PIN_DISABLE>;
};
pex_l7_rst_n_pag1 {
    nvidia,pins = "pex_l7_rst_n_pag1";
    nvidia,function = "pe7";
    nvidia,pull = <TEGRA_PIN_PULL_NONE>;
    nvidia,tristate = <TEGRA_PIN_DISABLE>;
    nvidia,enable-input = <TEGRA_PIN_ENABLE>;
    nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
    nvidia,lpdr = <TEGRA_PIN_DISABLE>;
};

2. Verify that the AGX Orin module’s PCIe controller is properly configured to operate in Endpoint mode.
=> The following configurations were applied for PCIe EP mode.
Since UPHY_REFCLK1_P and UPHY_REFCLK1_N are supplied by the non-NVIDIA host AP, SRNS was not used.
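With the REFCLK supplied by the host, I expect the EP node to also need the nvidia,enable-ext-refclk property (a sketch only; the property name is taken from the decompiled DT dump shared later in this thread — please verify it against the tegra234 device trees in your L4T release):

```dts
/* Sketch: C7 endpoint using an externally supplied REFCLK.
 * nvidia,enable-ext-refclk is assumed from the decompiled DT dump;
 * confirm against the tegra234 DT in your L4T release. */
pcie-ep@141e0000 {
    status = "okay";
    nvidia,enable-ext-refclk;
};
```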

* tegra234-p3737-0000+p3701-0000.dts
pcie@141e0000 {
    status = "disabled";
    num-lanes = <1>;

    nvidia,pex-prsnt-gpios = <&gpio_aon TEGRA234_AON_GPIO(EE, 5) GPIO_ACTIVE_LOW>;

    phys = <&p2u_gbe_0>;
    phy-names = "p2u-0";
};

pcie-ep@141e0000 {
    status = "okay";
    num-lanes = <1>;

    reset-gpios = <&gpio TEGRA234_MAIN_GPIO(AG, 1) GPIO_ACTIVE_LOW>;

    nvidia,refclk-select-gpios = <&gpio TEGRA234_MAIN_GPIO(AG, 0) GPIO_ACTIVE_HIGH>;

    nvidia,pex-prsnt-gpios = <&gpio_aon TEGRA234_AON_GPIO(EE, 5) GPIO_ACTIVE_LOW>;

    phys = <&p2u_gbe_0>;
    phy-names = "p2u-0";
};

The nvidia,pex-prsnt-gpios = <&gpio_aon TEGRA234_AON_GPIO(EE, 5) GPIO_ACTIVE_LOW>; line was taken from a forum discussion addressing PCIe communication issues between Orin APs.

* tegra234-p3737-0000+p3701-0000-dynamic.dts
/* PCIe C7 endpoint */
fragment-t234-p3737-0000-p3701-0000-pcie-c7-ep@0 {
    target-path = "/bus@0";
    board_config {
        odm-data = "gbe-uphy-config-1";
    };
    overlay {
        pcie@141e0000 {
            status = "disabled";
        };
        pcie-ep@141e0000 {
            status = "okay";
        };
    };
};

3. Verify that the non-NVIDIA host AP’s PCIe DEMUX is properly configured to select the AGX Orin module as the peripheral device.
=> The host AP’s PCIe DEMUX is connected to both the NPU and the AGX Orin, and the NPU is confirmed to be detected via lspci.

4. Check the schematics and configuration files to ensure that they are correct and match the requirements of the AGX Orin module and the non-NVIDIA host AP.
=> The AGX Orin is connected to the DEMUX using the same lane configuration as the NPU, and the DEMUX is connected to the non-NVIDIA host AP’s PCIe.

This is an update based on PCIe EP issues found on the forums.
Please let me know if my understanding is correct.

  1. Even if only a single physical lane is used, all 8 lanes must be specified in the device tree.
  2. The num-lanes = <1>; property should be removed.
  3. The vddio-pex-ctl-supply = <&vdd_1v8_ls>; line was added by following the existing pcie-ep@141a0000 definition already present in the device tree.

* tegra234-p3737-0000+p3701-0000.dts

pcie-ep@141e0000 {
    status = "okay";

    vddio-pex-ctl-supply = <&vdd_1v8_ls>;

    reset-gpios = <&gpio TEGRA234_MAIN_GPIO(AG, 1) GPIO_ACTIVE_LOW>;

    nvidia,refclk-select-gpios = <&gpio TEGRA234_MAIN_GPIO(AG, 0) GPIO_ACTIVE_HIGH>;

    nvidia,pex-prsnt-gpios = <&gpio_aon TEGRA234_AON_GPIO(EE, 5) GPIO_ACTIVE_LOW>;

    phys = <&p2u_gbe_0>, <&p2u_gbe_1>, <&p2u_gbe_2>,
           <&p2u_gbe_3>, <&p2u_gbe_4>, <&p2u_gbe_5>,
           <&p2u_gbe_6>, <&p2u_gbe_7>;
    phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
                "p2u-5", "p2u-6", "p2u-7";
};

I have one question.
In order for a non-NVIDIA AP (acting as a PCIe Root Port) to detect the NVIDIA device using lspci, does the NVIDIA side need to be configured as shown below?

In dmesg, the message
[ 423.427233] tegra194-pcie 141e0000.pcie-ep: Failed to init UPHY for PCIe EP: -22
appears after applying the commands below and then resetting the Root Port side.

modprobe pci-epf-dma-test
cd /sys/kernel/config/pci_ep/
mkdir functions/tegra_pcie_dma_epf/func1
echo 0x10de > functions/tegra_pcie_dma_epf/func1/vendorid
echo 0x229a > functions/tegra_pcie_dma_epf/func1/deviceid
echo 16 > functions/tegra_pcie_dma_epf/func1/msi_interrupts
ln -s functions/tegra_pcie_dma_epf/func1 controllers/${PCIE_EP_ADDR}.pcie-ep/
echo 1 > controllers/${PCIE_EP_ADDR}.pcie-ep/start

I am also attaching the current dmesg logs

dmesg.log (60.5 KB)

hdx@tegra-ubuntu:~$ sudo dmesg | grep pci
[ 0.309254] ehci-pci: EHCI PCI platform driver
[ 0.309496] ohci-pci: OHCI PCI platform driver
[ 7.962499] tegra194-pcie 14100000.pcie: Adding to iommu group 5
[ 7.965322] tegra194-pcie 14100000.pcie: host bridge /bus@0/pcie@14100000 ranges:
[ 7.965347] tegra194-pcie 14100000.pcie: MEM 0x2080000000..0x20a7ffffff → 0x2080000000
[ 7.965353] tegra194-pcie 14100000.pcie: MEM 0x20a8000000..0x20afffffff → 0x0040000000
[ 7.965356] tegra194-pcie 14100000.pcie: IO 0x0030100000..0x00301fffff → 0x0030100000
[ 7.965830] tegra194-pcie 14100000.pcie: iATU unroll: enabled
[ 7.965832] tegra194-pcie 14100000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 8.072065] tegra194-pcie 14100000.pcie: Link up
[ 8.073277] tegra194-pcie 14100000.pcie: Link up
[ 8.073328] tegra194-pcie 14100000.pcie: PCI host bridge to bus 0001:00
[ 8.073332] pci_bus 0001:00: root bus resource [io 0x0000-0xfffff] (bus address [0x30100000-0x301fffff])
[ 8.073335] pci_bus 0001:00: root bus resource [mem 0x20a8000000-0x20afffffff] (bus address [0x40000000-0x47ffffff])
[ 8.073337] pci_bus 0001:00: root bus resource [bus 00-ff]
[ 8.073339] pci_bus 0001:00: root bus resource [mem 0x2080000000-0x20a7ffffff pref]
[ 8.073384] pci 0001:00:00.0: [10de:229e] type 01 class 0x060400
[ 8.073531] pci 0001:00:00.0: PME# supported from D0 D3hot
[ 8.077063] pci 0001:01:00.0: [14e4:a00d] type 00 class 0x020000
[ 8.077234] pci 0001:01:00.0: reg 0x10: [mem 0x00000000-0x0001ffff 64bit]
[ 8.077337] pci 0001:01:00.0: reg 0x18: [mem 0x00000000-0x0001ffff 64bit]
[ 8.077438] pci 0001:01:00.0: reg 0x20: [mem 0x00000000-0x0000ffff 64bit]
[ 8.078506] pci 0001:01:00.0: PME# supported from D0 D3hot D3cold
[ 8.078878] pci 0001:01:00.0: reg 0x1dc: [mem 0x00000000-0x0001ffff 64bit]
[ 8.078880] pci 0001:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x000dffff 64bit] (contains BAR0 for 7 VFs)
[ 8.078981] pci 0001:01:00.0: reg 0x1e4: [mem 0x00000000-0x0001ffff 64bit]
[ 8.078983] pci 0001:01:00.0: VF(n) BAR2 space: [mem 0x00000000-0x000dffff 64bit] (contains BAR2 for 7 VFs)
[ 8.079083] pci 0001:01:00.0: reg 0x1ec: [mem 0x00000000-0x0000ffff 64bit]
[ 8.079086] pci 0001:01:00.0: VF(n) BAR4 space: [mem 0x00000000-0x0006ffff 64bit] (contains BAR4 for 7 VFs)
[ 8.082897] pci 0001:00:00.0: BAR 14: assigned [mem 0x20a8000000-0x20a82fffff]
[ 8.082904] pci 0001:01:00.0: BAR 0: assigned [mem 0x20a8000000-0x20a801ffff 64bit]
[ 8.082984] pci 0001:01:00.0: BAR 2: assigned [mem 0x20a8020000-0x20a803ffff 64bit]
[ 8.083064] pci 0001:01:00.0: BAR 7: assigned [mem 0x20a8040000-0x20a811ffff 64bit]
[ 8.083100] pci 0001:01:00.0: BAR 9: assigned [mem 0x20a8120000-0x20a81fffff 64bit]
[ 8.083135] pci 0001:01:00.0: BAR 4: assigned [mem 0x20a8200000-0x20a820ffff 64bit]
[ 8.083214] pci 0001:01:00.0: BAR 11: assigned [mem 0x20a8210000-0x20a827ffff 64bit]
[ 8.083251] pci 0001:00:00.0: PCI bridge to [bus 01-ff]
[ 8.083256] pci 0001:00:00.0: bridge window [mem 0x20a8000000-0x20a82fffff]
[ 8.083355] pcieport 0001:00:00.0: Adding to iommu group 5
[ 8.083446] pcieport 0001:00:00.0: PME: Signaling with IRQ 192
[ 8.083749] pcieport 0001:00:00.0: AER: enabled with IRQ 192
[ 8.084573] tegra194-pcie 14160000.pcie: Adding to iommu group 7
[ 8.086979] tegra194-pcie 14160000.pcie: host bridge /bus@0/pcie@14160000 ranges:
[ 8.086992] tegra194-pcie 14160000.pcie: MEM 0x2140000000..0x2427ffffff → 0x2140000000
[ 8.086999] tegra194-pcie 14160000.pcie: MEM 0x2428000000..0x242fffffff → 0x0040000000
[ 8.087002] tegra194-pcie 14160000.pcie: IO 0x0036100000..0x00361fffff → 0x0036100000
[ 8.087337] tegra194-pcie 14160000.pcie: iATU unroll: enabled
[ 8.087340] tegra194-pcie 14160000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 9.192259] tegra194-pcie 14160000.pcie: Phy link never came up
[ 10.192063] tegra194-pcie 14160000.pcie: Phy link never came up
[ 10.203602] tegra194-pcie 14160000.pcie: PCI host bridge to bus 0004:00
[ 10.203611] pci_bus 0004:00: root bus resource [io 0x100000-0x1fffff] (bus address [0x36100000-0x361fffff])
[ 10.203615] pci_bus 0004:00: root bus resource [mem 0x2428000000-0x242fffffff] (bus address [0x40000000-0x47ffffff])
[ 10.203618] pci_bus 0004:00: root bus resource [bus 00-ff]
[ 10.203619] pci_bus 0004:00: root bus resource [mem 0x2140000000-0x2427ffffff pref]
[ 10.203677] pci 0004:00:00.0: [10de:229c] type 01 class 0x060400
[ 10.203857] pci 0004:00:00.0: PME# supported from D0 D3hot
[ 10.251759] pci 0004:00:00.0: PCI bridge to [bus 01-ff]
[ 10.251974] pcieport 0004:00:00.0: Adding to iommu group 7
[ 10.252126] pcieport 0004:00:00.0: PME: Signaling with IRQ 194
[ 10.252263] pcieport 0004:00:00.0: AER: enabled with IRQ 194
[ 10.252560] pci_bus 0004:01: busn_res: [bus 01-ff] is released
[ 10.265990] pci 0004:00:00.0: Removing from iommu group 7
[ 10.266123] pci_bus 0004:00: busn_res: [bus 00-ff] is released
[ 10.272240] tegra194-pcie 141a0000.pcie: Adding to iommu group 56
[ 10.404670] tegra194-pcie 141a0000.pcie: host bridge /bus@0/pcie@141a0000 ranges:
[ 10.404712] tegra194-pcie 141a0000.pcie: MEM 0x2800000000..0x2b27ffffff → 0x2800000000
[ 10.404722] tegra194-pcie 141a0000.pcie: MEM 0x2b28000000..0x2b2fffffff → 0x0040000000
[ 10.404729] tegra194-pcie 141a0000.pcie: IO 0x003a100000..0x003a1fffff → 0x003a100000
[ 10.409379] tegra194-pcie 141a0000.pcie: iATU unroll: enabled
[ 10.409384] tegra194-pcie 141a0000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 11.508599] tegra194-pcie 141a0000.pcie: Phy link never came up
[ 12.510415] tegra194-pcie 141a0000.pcie: Phy link never came up
[ 12.510513] tegra194-pcie 141a0000.pcie: PCI host bridge to bus 0005:00
[ 12.510518] pci_bus 0005:00: root bus resource [io 0x200000-0x2fffff] (bus address [0x3a100000-0x3a1fffff])
[ 12.510521] pci_bus 0005:00: root bus resource [mem 0x2b28000000-0x2b2fffffff] (bus address [0x40000000-0x47ffffff])
[ 12.510525] pci_bus 0005:00: root bus resource [bus 00-ff]
[ 12.510526] pci_bus 0005:00: root bus resource [mem 0x2800000000-0x2b27ffffff pref]
[ 12.510580] pci 0005:00:00.0: [10de:229a] type 01 class 0x060400
[ 12.510747] pci 0005:00:00.0: PME# supported from D0 D3hot
[ 12.517193] pci 0005:00:00.0: PCI bridge to [bus 01-ff]
[ 12.517378] pcieport 0005:00:00.0: Adding to iommu group 56
[ 12.517501] pcieport 0005:00:00.0: PME: Signaling with IRQ 196
[ 12.517844] pcieport 0005:00:00.0: AER: enabled with IRQ 196
[ 12.518369] pci_bus 0005:01: busn_res: [bus 01-ff] is released
[ 12.518862] pci 0005:00:00.0: Removing from iommu group 56
[ 12.518922] pci_bus 0005:00: busn_res: [bus 00-ff] is released
[ 12.521433] tegra194-pcie 141e0000.pcie-ep: Adding to iommu group 58
[ 423.427233] tegra194-pcie 141e0000.pcie-ep: Failed to init UPHY for PCIe EP: -22

Anything related to the UPHY setting usually means you forgot to update ODMDATA before flashing.

I used the following ODMDATA configuration:
ODMDATA="gbe-uphy-config-1,hsstp-lane-map-3,hsio-uphy-config-0,nvhs-uphy-config-0".
To support PCIe EP mode on C7, gbe-uphy-config-0 was replaced with gbe-uphy-config-1.
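For reference, a quick standalone parse of the ODMDATA string (a Python sketch; it assumes every option ends in a numeric index) confirms which gbe-uphy-config value was requested:

```python
# Parse ODMDATA options of the form "<name>-<index>" into a dict,
# splitting each option at its last '-'.
odmdata = "gbe-uphy-config-1,hsstp-lane-map-3,hsio-uphy-config-0,nvhs-uphy-config-0"

fields = {}
for opt in odmdata.split(","):
    name, _, index = opt.rpartition("-")
    fields[name] = int(index)

print(fields["gbe-uphy-config"])  # -> 1
```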

The ODMDATA configuration is already correct. Are there any other items or configurations I should check?

pcie-ep@141e0000 {
    power-domains = <0x03 0x10>;
    iommus = <0xf3 0x08>;
    nvidia,host1x = <0x10f>;
    pinctrl-names = "default";
    dma-coherent;
    nvidia,pex-prsnt-gpios = <0x108 0x25 0x01>;
    interconnect-names = "dma-mem\0write";
    phy-names = "p2u-0\0p2u-1\0p2u-2\0p2u-3\0p2u-4\0p2u-5\0p2u-6\0p2u-7";
    nvidia,bpmp = <0x03 0x07>;
    pinctrl-0 = <0x12a>;
    clock-names = "core";
    interconnects = <0x58 0x2a 0x59 0x58 0x30 0x59>;
    reg-names = "appl\0atu_dma\0dbi\0addr_space";
    nvidia,aspm-l0s-entrance-latency-us = <0x03>;
    num-ob-windows = <0x08>;
    resets = <0x03 0x0f 0x03 0x0e>;
    interrupts = <0x00 0x162 0x04>;
    clocks = <0x03 0xab>;
    nvidia,enable-ext-refclk;
    reset-gpios = <0xf1 0xc1 0x01>;
    num-lanes = <0x08>;
    compatible = "nvidia,tegra234-pcie-ep";
    vddio-pex-ctl-supply = <0x115>;
    iommu-map-mask = <0x00>;
    status = "okay";
    interrupt-names = "intr";
    phys = <0x122 0x123 0x124 0x125 0x126 0x127 0x128 0x129>;
    reg = <0x00 0x141e0000 0x00 0x20000 0x00 0x3e040000 0x00 0x40000 0x00 0x3e080000 0x00 0x40000 0x2e 0x40000000 0x04 0x00>;
    nvidia,refclk-select-gpios = <0xf1 0xc0 0x00>;
    iommu-map = <0x00 0xf3 0x08 0x1000>;
    reset-names = "apb\0core";
    nvidia,aspm-pwr-on-t-us = <0x14>;
    nvidia,aspm-cmrt-us = <0x3c>;
    num-ib-windows = <0x02>;
};

Could you dump the result of the command below?

sudo cat /sys/kernel/debug/bpmp/debug/uphy/config

hdx@tegra-ubuntu:~$ sudo cat /sys/kernel/debug/bpmp/debug/uphy/config

0x00041000

I have one question.
Is it possible to use C1 on UPHY0 as a PCIe Root Port and C7 on UPHY2 as a PCIe Endpoint simultaneously on a single Jetson AGX Orin?

All the PCIe controllers are independent. Using C1 elsewhere won’t affect C7.

I understand that the value 0x00041000 indicates
GBE UPHY Config [22:18] = 1, which means that gbe-uphy-config-1 has been applied correctly.
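That bit-field reading can be checked mechanically (a standalone Python sketch; the [22:18] field position is taken from the description above):

```python
# Decode the GBE UPHY Config field, bits [22:18], from the value read
# out of /sys/kernel/debug/bpmp/debug/uphy/config.
val = 0x00041000

def bits(value, hi, lo):
    """Extract the inclusive bit range [hi:lo] from value."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

gbe_uphy_config = bits(val, 22, 18)
print(gbe_uphy_config)  # -> 1, i.e. gbe-uphy-config-1
```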

Based on this, do you see anything unusual or incorrect in the pcie-ep@141e0000 device tree that I previously shared?
(Link for reference:
https://forums.developer.nvidia.com/t/connecting-agx-orin-pcie-ep-mode-to-non-nvidia-ap-via-pcie/360301/8?u=nyx)

In our setup, the non-NVIDIA AP acting as the Root Port is physically connected to the Jetson AGX Orin with only one PCIe lane.
However, I found forum discussions stating that, regardless of the physical lane connection, all 8 lanes must be specified in the device tree.

Based on that guidance, I updated the device tree to configure all 8 lanes as shown below:

pcie-ep@141e0000 {
    status = "okay";

    vddio-pex-ctl-supply = <&vdd_1v8_ls>;
    
    reset-gpios = <&gpio TEGRA234_MAIN_GPIO(AG, 1) GPIO_ACTIVE_LOW>;

    nvidia,refclk-select-gpios = <&gpio TEGRA234_MAIN_GPIO(AG, 0) GPIO_ACTIVE_HIGH>;

    nvidia,pex-prsnt-gpios = <&gpio_aon TEGRA234_AON_GPIO(EE, 5) GPIO_ACTIVE_LOW>;

    phys = <&p2u_gbe_0>, <&p2u_gbe_1>, <&p2u_gbe_2>,
           <&p2u_gbe_3>, <&p2u_gbe_4>, <&p2u_gbe_5>,
           <&p2u_gbe_6>, <&p2u_gbe_7>;
    phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
                "p2u-5", "p2u-6", "p2u-7";
};

Could you please let me know if this 8-lane configuration looks correct, or if there are any issues with this device tree setup?

As a sanity check, does your board route PEX_CLKx (REFCLK) to a 100 MHz clock source, along with the PEX_Cx_CLKREQ and PEX_Cx_RST signals, all corresponding to the same controller you are using?