Confused about the details of how to flash PCIe Endpoint Mode

I’m using a Jetson Orin NX 16GB and trying to flash it to PCIe Endpoint Mode by following the PCIe Endpoint Mode — NVIDIA Jetson Linux Developer Guide documentation.

What confuses me is: where should the ODMDATA="gbe-uphy-config-8,hsstp-lane-map-3,hsio-uphy-config-41"; line be put?

  1. When I put it at the beginning of jetson-orin-nano-devkit-super.conf:
ODMDATA="gbe-uphy-config-8,hsstp-lane-map-3,hsio-uphy-config-41";
source "${LDK_DIR}/p3768-0000-p3767-0000-a0.conf";
p3767_super_overlay() {...}
update_flash_args() {...}

flashing succeeds, but the following step fails: the controllers folder is empty and I can’t find 14160000.pcie-ep.

  2. When I put it at the end of jetson-orin-nano-devkit-super.conf:
source "${LDK_DIR}/p3768-0000-p3767-0000-a0.conf";
p3767_super_overlay() {...}
update_flash_args() {...}
ODMDATA="gbe-uphy-config-8,hsstp-lane-map-3,hsio-uphy-config-41";

then the flash fails when I run sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 -c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-orin-nano-devkit-super internal (my storage device is NVMe):

Did I do anything wrong? The *.conf file? The location where hsio-uphy-config-41 is set? Or something else? Thanks very much!
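For reference, the board *.conf files are just shell fragments that flash.sh sources, so a later plain assignment overrides an earlier one. Below is a minimal sketch (assuming it is run from the Linux_for_Tegra directory and that the nested conf files don’t require extra variables) to see which ODMDATA value actually ends up effective:

    # Sketch: source the board conf the way flash.sh would and print ODMDATA.
    ( LDK_DIR="$PWD"; source ./jetson-orin-nano-devkit-super.conf; echo "ODMDATA=${ODMDATA}" )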

*** Please note that this reply is generated by LLM automatically ***

To flash the Jetson Orin NX 16GB to PCIe Endpoint Mode, you need to follow the steps outlined in the NVIDIA Jetson Linux Developer Guide. Based on the provided context, it seems like you are having trouble finding the 14160000.pcie-ep image and the 14581105 file.

Here’s a step-by-step guide to help you flash the Jetson Orin NX 16GB to PCIe Endpoint Mode:

  1. Edit the jetson-agx-orin-devkit.conf file: In the extracted Jetson Linux release directory, add the following line to override ODMDATA:

    ODMDATA="gbe-uphy-config-22,nvhs-uphy-config-1,hsio-uphy-config-0,gbe0-enable-10g,hsstp-lane-map-3";
    

    This sets the nvhs-uphy-config-1 to enable PCIe Endpoint Mode.

  2. Reflash the device: Run the following command to reflash the device:

    # sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1
    

    This step completely erases data that was previously stored on the Jetson device.

  3. Delete ODMDATA from jetson-agx-orin-devkit.conf: After reflashing the device, delete the ODMDATA line from the jetson-agx-orin-devkit.conf file to restore the property’s original value. This ensures that devices flashed in the future will operate in PCIe root port mode.

Regarding the missing files, it’s possible that the 14160000.pcie-ep image and the 14581105 file are not generated correctly. Make sure you are running the correct commands and that the files are being generated in the correct location.
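As a quick check (a sketch, not a step from the guide): after booting the flashed image, the endpoint controller should appear under the PCI endpoint configfs directory, for example:

    # Sketch: verify the endpoint controller was probed after boot.
    # Mount configfs first in case it is not already mounted.
    sudo mount -t configfs none /sys/kernel/config 2>/dev/null || true
    ls /sys/kernel/config/pci_ep/controllers/
    # An entry such as 14160000.pcie-ep is expected when that controller is in EP mode.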

Comparison with other Jetson modules:

  • The Jetson AGX Orin series and the Jetson Orin NX series have different configuration files and flashing procedures. Make sure you are using the correct configuration file and flashing procedure for your specific module.
  • The Jetson Orin NX series requires the use of the initrd flash method, which is different from the flashing procedure for the Jetson AGX Orin series.

Troubleshooting:

  • If you are still having trouble finding the 14160000.pcie-ep image and the 14581105 file, try running the flashing command with the --showlogs option to see if there are any error messages that can help you diagnose the issue.
  • Make sure you are running the correct version of the Jetson Linux release and that you have the necessary dependencies installed.

I hope this helps you resolve the issue and successfully flash your Jetson Orin NX 16GB to PCIe Endpoint Mode.

I suddenly realized that I’m flashing the system to an NVMe drive, which is connected to the board via PCIe. Maybe that’s why it says it can’t find the nvme device? What can I do…

I would say make sure you can really flash your board before enabling anything.

Yes, if I remove the ODMDATA, the flash succeeds.

Just want to clarify that the PCIe endpoint section is not a feasible item to test on the Orin Nano devkit.

It is for a custom board to use. Also, PCIe endpoint and root port are an either-or choice: if a controller is used as an EP, it can’t act as an RP to detect an NVMe at the same time.

I tried a few things:

1. It seems there are 4 PCIe controllers on my board: one for the network controller, one for the Ethernet controller, one for the memory controller, and the one I want to set to EP mode. Evidence:

when I run lspci -nn:

0001:00:00.0 PCI bridge [0604]: NVIDIA Corporation Device [10de:229e] (rev a1)
0001:01:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8822CE 802.11ac PCIe Wireless Network Adapter [10ec:c822]

0004:00:00.0 PCI bridge [0604]: NVIDIA Corporation Device [10de:229c] (rev a1)
0004:01:00.0 Non-Volatile memory controller [0108]: Realtek Semiconductor Co., Ltd. Device [10ec:5765] (rev 01)

0008:00:00.0 PCI bridge [0604]: NVIDIA Corporation Device [10de:229c] (rev a1)
0008:01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)

and when I run sudo dmesg | grep -i pcie:

[ 4.924630] tegra194-pcie 14100000.pcie: Adding to iommu group 3
[ 4.927485] tegra194-pcie 14100000.pcie: host bridge /bus@0/pcie@14100000 ranges:
[ 4.927509] tegra194-pcie 14100000.pcie: MEM 0x2080000000..0x20a7ffffff → 0x2080000000
[ 4.927515] tegra194-pcie 14100000.pcie: MEM 0x20a8000000..0x20afffffff → 0x0040000000
[ 4.927519] tegra194-pcie 14100000.pcie: IO 0x0030100000..0x00301fffff → 0x0030100000
[ 4.927831] tegra194-pcie 14100000.pcie: iATU unroll: enabled
[ 4.927833] tegra194-pcie 14100000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 5.032548] tegra194-pcie 14100000.pcie: Link up
[ 5.034008] tegra194-pcie 14100000.pcie: Link up
[ 5.034072] tegra194-pcie 14100000.pcie: PCI host bridge to bus 0001:00
[ 5.051051] pcieport 0001:00:00.0: Adding to iommu group 3
[ 5.051143] pcieport 0001:00:00.0: PME: Signaling with IRQ 190
[ 5.052504] pcieport 0001:00:00.0: AER: enabled with IRQ 190

[ 5.053376] tegra194-pcie 14160000.pcie: Adding to iommu group 5
[ 5.057411] tegra194-pcie 14160000.pcie: host bridge /bus@0/pcie@14160000 ranges:
[ 5.057429] tegra194-pcie 14160000.pcie: MEM 0x2140000000..0x2427ffffff → 0x2140000000
[ 5.057436] tegra194-pcie 14160000.pcie: MEM 0x2428000000..0x242fffffff → 0x0040000000
[ 5.057439] tegra194-pcie 14160000.pcie: IO 0x0036100000..0x00361fffff → 0x0036100000
[ 5.057934] tegra194-pcie 14160000.pcie: iATU unroll: enabled
[ 5.057937] tegra194-pcie 14160000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 5.164299] tegra194-pcie 14160000.pcie: Link up
[ 5.165662] tegra194-pcie 14160000.pcie: Link up
[ 5.165713] tegra194-pcie 14160000.pcie: PCI host bridge to bus 0004:00
[ 5.182945] pcieport 0004:00:00.0: Adding to iommu group 5
[ 5.183026] pcieport 0004:00:00.0: PME: Signaling with IRQ 192
[ 5.184296] pcieport 0004:00:00.0: AER: enabled with IRQ 192

[ 5.185760] tegra194-pcie 141e0000.pcie: Adding to iommu group 6
[ 5.188008] tegra194-pcie 141e0000.pcie: host bridge /bus@0/pcie@141e0000 ranges:
[ 5.188024] tegra194-pcie 141e0000.pcie: MEM 0x3000000000..0x3227ffffff → 0x3000000000
[ 5.188031] tegra194-pcie 141e0000.pcie: MEM 0x3228000000..0x322fffffff → 0x0040000000
[ 5.188035] tegra194-pcie 141e0000.pcie: IO 0x003e100000..0x003e1fffff → 0x003e100000
[ 5.188532] tegra194-pcie 141e0000.pcie: iATU unroll: enabled
[ 5.188535] tegra194-pcie 141e0000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 6.300303] tegra194-pcie 141e0000.pcie: Phy link never came up
[ 7.300336] tegra194-pcie 141e0000.pcie: Phy link never came up
[ 7.300498] tegra194-pcie 141e0000.pcie: PCI host bridge to bus 0007:00
[ 7.312697] pcieport 0007:00:00.0: Adding to iommu group 6
[ 7.325601] pcieport 0007:00:00.0: PME: Signaling with IRQ 194
[ 7.334600] pcieport 0007:00:00.0: AER: enabled with IRQ 194

[ 7.378497] tegra194-pcie 140a0000.pcie: Adding to iommu group 51
[ 7.520839] tegra194-pcie 140a0000.pcie: host bridge /bus@0/pcie@140a0000 ranges:
[ 7.520870] tegra194-pcie 140a0000.pcie: MEM 0x3240000000..0x3527ffffff → 0x3240000000
[ 7.520876] tegra194-pcie 140a0000.pcie: MEM 0x3528000000..0x352fffffff → 0x0040000000
[ 7.520879] tegra194-pcie 140a0000.pcie: IO 0x002a100000..0x002a1fffff → 0x002a100000
[ 7.529240] tegra194-pcie 140a0000.pcie: iATU unroll: enabled
[ 7.529248] tegra194-pcie 140a0000.pcie: Detected iATU regions: 8 outbound, 2 inbound
[ 7.638242] tegra194-pcie 140a0000.pcie: Link up
[ 7.641045] tegra194-pcie 140a0000.pcie: Link up
[ 7.647848] tegra194-pcie 140a0000.pcie: PCI host bridge to bus 0008:00
[ 7.704031] pcieport 0008:00:00.0: Adding to iommu group 51
[ 7.704189] pcieport 0008:00:00.0: PME: Signaling with IRQ 188
[ 7.706069] pcieport 0008:00:00.0: AER: enabled with IRQ 188

So I think 141e0000 is my target PCIe.
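(For reference, a small sketch of how I map each lspci domain to its controller base address through sysfs; the path layout and grep pattern are assumptions:)

    # Sketch: print which Tegra controller each PCI domain's root port belongs to.
    for d in /sys/bus/pci/devices/000?:00:00.0; do
        echo "$d -> $(readlink -f "$d" | grep -o '14[0-9a-f]*0000\.pcie')"
    done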

2. Then I tried to set EP mode only for 141e0000 (without setting ODMDATA as in the tutorial), by modifying /boot/dtb/kernel_tegra234-p3768-0000+p3767-0000-nv-super.dtb to change the status of its pcie & pcie-ep nodes (the editing workflow is sketched after the listings):

pcie@141e0000 {
    compatible = "nvidia,tegra234-pcie";
    power-domains = <0x03 0x10>;
    reg = <0x00 0x141e0000 0x00 0x20000 0x00 0x3e000000 0x00 0x40000 0x00 0x3e040000 0x00 0x40000 0x00 0x3e080000 0x00 0x40000 0x32 0x30000000 0x00 0x10000000>;
    reg-names = "appl\0config\0atu_dma\0dbi\0ecam";
    #address-cells = <0x03>;
    #size-cells = <0x02>;
    device_type = "pci";
    num-lanes = <0x08>;
    num-viewport = <0x08>;
    linux,pci-domain = <0x07>;
    clocks = <0x03 0xab>;
    clock-names = "core";
    resets = <0x03 0x0f 0x03 0x0e>;
    reset-names = "apb\0core";
    interrupts = <0x00 0x162 0x04 0x00 0x163 0x04>;
    interrupt-names = "intr\0msi";
    #interrupt-cells = <0x01>;
    interrupt-map-mask = <0x00 0x00 0x00 0x00>;
    interrupt-map = <0x00 0x00 0x00 0x00 0x01 0x00 0x162 0x04>;
    nvidia,bpmp = <0x03 0x07>;
    nvidia,aspm-cmrt-us = <0x3c>;
    nvidia,aspm-pwr-on-t-us = <0x14>;
    nvidia,aspm-l0s-entrance-latency-us = <0x03>;
    bus-range = <0x00 0xff>;
    ranges = <0x43000000 0x30 0x00 0x30 0x00 0x02 0x28000000 0x2000000 0x00 0x40000000 0x32 0x28000000 0x00 0x8000000 0x1000000 0x00 0x3e100000 0x00 0x3e100000 0x00 0x100000>;
    interconnects = <0x57 0x2a 0x58 0x57 0x30 0x58>;
    interconnect-names = "dma-mem\0write";
    iommu-map = <0x00 0xf0 0x08 0x1000>;
    iommu-map-mask = <0x00>;
    dma-coherent;
    status = "disabled";
    vddio-pex-ctl-supply = <0xf5>;
    phys = <0x118 0x119>;
    phy-names = "p2u-0\0p2u-1";
    iommus = <0xf0 0x08>;
};

pcie-ep@141e0000 {
    compatible = "nvidia,tegra234-pcie-ep";
    power-domains = <0x03 0x10>;
    reg = <0x00 0x141e0000 0x00 0x20000 0x00 0x3e040000 0x00 0x40000 0x00 0x3e080000 0x00 0x40000 0x2e 0x40000000 0x04 0x00>;
    reg-names = "appl\0atu_dma\0dbi\0addr_space";
    num-lanes = <0x08>;
    clocks = <0x03 0xab>;
    clock-names = "core";
    resets = <0x03 0x0f 0x03 0x0e>;
    reset-names = "apb\0core";
    interrupts = <0x00 0x162 0x04>;
    interrupt-names = "intr";
    nvidia,bpmp = <0x03 0x07>;
    nvidia,enable-ext-refclk;
    nvidia,aspm-cmrt-us = <0x3c>;
    nvidia,aspm-pwr-on-t-us = <0x14>;
    nvidia,aspm-l0s-entrance-latency-us = <0x03>;
    interconnects = <0x57 0x2a 0x58 0x57 0x30 0x58>;
    interconnect-names = "dma-mem\0write";
    iommu-map = <0x00 0xf0 0x08 0x1000>;
    iommu-map-mask = <0x00>;
    dma-coherent;
    status = "okay";
    phys = <0x118 0x119>;
    phy-names = "p2u-0\0p2u-1";
    iommus = <0xf0 0x08>;
    pinctrl-names = "default";
    pinctrl-0 = <0x11a>;
    nvidia,host1x = <0x110>;
    num-ib-windows = <0x02>;
    num-ob-windows = <0x08>;
};

I replace host-PC > L4T/kernel/dtb/tegra234-p3768-0000+p3767-0000-nv-super.dtb with this modified file, and flash again by running sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 -c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-orin-nano-devkit-super internal

It worked: both /boot/dtb/kernel_tegra234-p3768-0000+p3767-0000-nv-super.dtb and /proc/device-tree have been modified. For comparison, 141e0000 now shows as pcie-ep while 14160000 (the NVMe SSD) remains pcie.

But the controllers folder is still empty, and the output of sudo dmesg | grep -i "141e0000\|pcie.*ep\|tegra.*pcie" is:

[ 6.505319] tegra194-pcie 141e0000.pcie-ep: Adding to iommu group 3
[ 6.506001] tegra194-pcie 141e0000.pcie-ep: Failed to find PHY entries: -22
[ 6.506004] tegra194-pcie 141e0000.pcie-ep: Failed to parse device tree: -22
[ 6.506023] tegra194-pcie: probe of 141e0000.pcie-ep failed with error -22

3. So the mapping to the physical PHY is missing, right? I tried to add the phys entries in the dtb file:

pcie-ep@141e0000 {
    compatible = "nvidia,tegra234-pcie-ep";
    power-domains = <0x03 0x10>;
    reg = <0x00 0x141e0000 0x00 0x20000 0x00 0x3e040000 0x00 0x40000 0x00 0x3e080000 0x00 0x40000 0x2e 0x40000000 0x04 0x00>;
    reg-names = "appl\0atu_dma\0dbi\0addr_space";
    num-lanes = <0x08>;
    clocks = <0x03 0xab>;
    clock-names = "core";
    resets = <0x03 0x0f 0x03 0x0e>;
    reset-names = "apb\0core";
    interrupts = <0x00 0x162 0x04>;
    interrupt-names = "intr";
    nvidia,bpmp = <0x03 0x07>;
    nvidia,enable-ext-refclk;
    nvidia,aspm-cmrt-us = <0x3c>;
    nvidia,aspm-pwr-on-t-us = <0x14>;
    nvidia,aspm-l0s-entrance-latency-us = <0x03>;
    interconnects = <0x57 0x2a 0x58 0x57 0x30 0x58>;
    interconnect-names = "dma-mem\0write";
    iommu-map = <0x00 0xf0 0x08 0x1000>;
    iommu-map-mask = <0x00>;
    dma-coherent;
    status = "okay";
    phys = <0x118 0x119>;
    phy-names = "p2u-0\0p2u-1";

    iommus = <0xf0 0x08>;
    pinctrl-names = "default";
    pinctrl-0 = <0x11a>;
    nvidia,host1x = <0x110>;
    num-ib-windows = <0x02>;
    num-ob-windows = <0x08>;
};

and flashed again. I find this file has been modified, but /proc/device-tree did not change, and the message is still:

[ 6.505319] tegra194-pcie 141e0000.pcie-ep: Adding to iommu group 3
[ 6.506001] tegra194-pcie 141e0000.pcie-ep: Failed to find PHY entries: -22
[ 6.506004] tegra194-pcie 141e0000.pcie-ep: Failed to parse device tree: -22
[ 6.506023] tegra194-pcie: probe of 141e0000.pcie-ep failed with error -22
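(To double-check what the kernel actually booted with, I dump the live device tree from /proc; a sketch assuming dtc is installed on the target:)

    # Sketch: show the 141e0000 endpoint node as the running kernel sees it.
    sudo dtc -I fs -O dts /proc/device-tree 2>/dev/null | grep -A 3 'pcie-ep@141e0000'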

Could you have a look at my questions:

  1. Can I set EP mode for only one PCIe interface?
  2. Was my approach effective?
  3. Why does the PHY modification not work?

Thank you so much~

Hi,

You need to understand how things work here.

There is no such thing as "So I think 141e0000 is my target PCIe".

This depends on the hardware. If your hardware is trying to use PCIe C7 as EP, then that is 141e0000.
If you have no hardware ready there at all, then 141e0000 won’t be your EP controller…

So what you need to do here is go back and ask your hardware engineer for this info.

There is nothing to "think" here. It is entirely determined by what your hardware connection is…

Also, we don’t support C7 in EP mode, as the design guide mentions…

Yes, I made a mistake before, thanks for your correction. And I encountered another problem:

I used another NVMe SSD to replace the original NVMe (moving it from C4 to C7):

0001:00:00.0 PCI bridge: NVIDIA Corporation Device 229e (rev a1)
0001:01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8822CE 802.11ac PCIe Wireless Network Adapter
0007:00:00.0 PCI bridge: NVIDIA Corporation Device 229a (rev a1)
0007:01:00.0 Non-Volatile memory controller: MAXIO Technology (Hangzhou) Ltd. NVMe SSD Controller MAP1202 (rev 01)
0008:00:00.0 PCI bridge: NVIDIA Corporation Device 229c (rev a1)
0008:01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

Then the C4 controller is free, so I can use it to connect two Orin devices. I flashed one in root port mode and the other in endpoint mode, which was OK; but after connecting them, they can’t recognize each other, and the error message is "Phy link never came up".

I think my link is right:

Root machine <---- M.2 to oculink <---- oculink cable -----> M.2 to oculink -----> endpoint machine

It seems EP mode was enabled successfully as well: /sys/kernel/config/pci_ep/controllers/14160000.pcie-ep appears, and its device tree is below (a sketch of the next configfs step follows the listings):

pcie-ep@14160000 {
			power-domains = <0x03 0x08>;
			iommus = <0x04 0x13>;
			nvidia,host1x = <0x110>;
			pinctrl-names = "default";
			dma-coherent;
			interconnect-names = "dma-mem\0write";
			phy-names = "p2u-0\0p2u-1\0p2u-2\0p2u-3";
			nvidia,bpmp = <0x03 0x04>;
			pinctrl-0 = <0x11f>;
			clock-names = "core";
			interconnects = <0x57 0xe0 0x58 0x57 0xe1 0x58>;
			reg-names = "appl\0atu_dma\0dbi\0addr_space";
			nvidia,aspm-l0s-entrance-latency-us = <0x03>;
			num-ob-windows = <0x08>;
			resets = <0x03 0x7d 0x03 0x78>;
			interrupts = <0x00 0x33 0x04>;
			clocks = <0x03 0xe0>;
			nvidia,enable-ext-refclk;
			reset-gpios = <0xf3 0x59 0x01>;
			num-lanes = <0x04>;
			compatible = "nvidia,tegra234-pcie-ep";
			status = "okay";
			interrupt-names = "intr";
			phys = <0x112 0x113 0x114 0x115>;
			reg = <0x00 0x14160000 0x00 0x20000 0x00 0x36040000 0x00 0x40000 0x00 0x36080000 0x00 0x40000 0x21 0x40000000 0x03 0x00>;
			nvidia,refclk-select-gpios = <0x105 0x04 0x00>;
			reset-names = "apb\0core";
			nvidia,aspm-pwr-on-t-us = <0x14>;
			nvidia,aspm-cmrt-us = <0x3c>;
			num-ib-windows = <0x02>;
		};

pcie@14160000 {
			power-domains = <0x03 0x08>;
			iommus = <0x04 0x13>;
			#address-cells = <0x03>;
			dma-coherent;
			interconnect-names = "dma-mem\0write";
			phy-names = "p2u-0\0p2u-1\0p2u-2\0p2u-3";
			nvidia,bpmp = <0x03 0x04>;
			bus-range = <0x00 0xff>;
			clock-names = "core";
			interconnects = <0x57 0xe0 0x58 0x57 0xe1 0x58>;
			reg-names = "appl\0config\0atu_dma\0dbi\0ecam";
			nvidia,aspm-l0s-entrance-latency-us = <0x03>;
			resets = <0x03 0x7d 0x03 0x78>;
			interrupts = <0x00 0x33 0x04 0x00 0x34 0x04>;
			clocks = <0x03 0xe0>;
			interrupt-map = <0x00 0x00 0x00 0x00 0x01 0x00 0x33 0x04>;
			#size-cells = <0x02>;
			device_type = "pci";
			interrupt-map-mask = <0x00 0x00 0x00 0x00>;
			num-lanes = <0x04>;
			compatible = "nvidia,tegra234-pcie";
			vddio-pex-ctl-supply = <0xf5>;
			ranges = <0x43000000 0x21 0x40000000 0x21 0x40000000 0x02 0xe8000000 0x2000000 0x00 0x40000000 0x24 0x28000000 0x00 0x8000000 0x1000000 0x00 0x36100000 0x00 0x36100000 0x00 0x100000>;
			iommu-map-mask = <0x00>;
			#interrupt-cells = <0x01>;
			status = "disabled";
			interrupt-names = "intr\0msi";
			phys = <0x112 0x113 0x114 0x115>;
			num-viewport = <0x08>;
			reg = <0x00 0x14160000 0x00 0x20000 0x00 0x36000000 0x00 0x40000 0x00 0x36040000 0x00 0x40000 0x00 0x36080000 0x00 0x40000 0x24 0x30000000 0x00 0x10000000>;
			linux,pci-domain = <0x04>;
			iommu-map = <0x00 0x04 0x13 0x1000>;
			reset-names = "apb\0core";
			nvidia,aspm-pwr-on-t-us = <0x14>;
			nvidia,aspm-cmrt-us = <0x3c>;
		};
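(For completeness, once the controller shows up under configfs, the next step I plan to try is binding a test endpoint function to it. This is only a sketch based on the generic kernel pci_epf_test flow; the module name and the vendor/device IDs are placeholders, not values from the Jetson guide:)

    # Sketch: bind the generic pci_epf_test function to the endpoint controller.
    sudo modprobe pci_epf_test
    cd /sys/kernel/config/pci_ep
    sudo mkdir -p functions/pci_epf_test/func1
    echo 0x10de | sudo tee functions/pci_epf_test/func1/vendorid
    echo 0x0001 | sudo tee functions/pci_epf_test/func1/deviceid
    sudo ln -s /sys/kernel/config/pci_ep/functions/pci_epf_test/func1 controllers/14160000.pcie-ep/
    echo 1 | sudo tee controllers/14160000.pcie-ep/start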

The PCIe endpoint connection has hardware requirements that are noted in the design guide document.

Are you sure that the "M.2 to oculink <---- oculink cable -----> M.2 to oculink" setup you are using there really follows the design guideline?

Also, did you remember to change ODMDATA to enable endpoint…?

Is the design guide document PCIe Endpoint Mode — NVIDIA Jetson Linux Developer Guide? It doesn’t seem to list any further requirements…

Can you provide a more specific introduction? And if "M.2 to oculink <---- oculink cable -----> M.2 to oculink" is inappropriate, what is the more common practice?

BTW, I have changed ODMDATA to enable endpoint mode as described in PCIe Endpoint Mode — NVIDIA Jetson Linux Developer Guide.

No, that is not the design guide document.

Please refer to the Orin AGX design guide doc for this picture. Even Orin NX/Nano needs to follow a similar design, except that you are using C4 rather than C5.

Can you share the document link? Thanks very much.

These two web pages are things you should know from day one of using Jetson.

The hardware documents are in the Download Center and the software docs are in the Jetson archive.

I didn’t do a good job of collecting information in the early stage, sorry.

I may have discovered the problem, but it’s strange:

  • In the AGX series doc, the EP_READY_N signal is necessary, and my M.2-to-oculink adapter does not bring it out!

  • but in the NX series doc, there is no such signal; in fact, it never even mentions endpoint mode:

I really doubt whether it supports endpoint mode, but the document also states that the C4 controller can be configured as an endpoint…

This GPIO could be any other GPIO, as the document mentions.

I’m not familiar with hardware, so I’d like to confirm a question first:

If I want to interconnect two identical Orin NX devices via PCIe, do I need to customize the carrier board of the endpoint device instead of using the official one, for example for the signal direction or the reference clock?

If not, apart from the following operations, what else specifically do I need to do to proceed with the configuration? I apologize for not knowing how to operate the GPIO pin you mentioned…

Just want to clarify that … I already explained what you are trying to ask at the beginning of this post.

Okay, I was too stubborn.

Can AGX Orin do that? It seems that its support for EP mode is much stronger than that of the NX series, with a built-in EP_READY signal. Does its EP device also require a customized carrier board?

And regarding the EP_READY pin, is a specially made link cable required?