I want to set up an Orin Dev Kit as a PCIe endpoint (controller C5 in EP mode) connected over a PCIe crossover cable, and am following the instructions under Enable PCIe in a Customer CVB Design.
Step 1: in p3701.conf.common, edit line 164 to configuration #2, ODMDATA="gbe-uphy-config-0,hsstp-lane-map-3,hsio-uphy-config-16,nvhs-uphy-config-0";. Also, the link to the T23x BCT Deployment Guide there is broken.
Step 3: in Jetson_AGX_Orin_Pinmux_Config_Template_082422.xlsm, edit rows 205:283, columns AS, AT, AY to GPIO (rsvd1), SFIO (PE*_CLKREQ_L), and Input, where * is 0-10. Changing the “Customer Usage” column to the respective values is not allowed; Excel rejects it with “This value doesn’t match the data validation restrictions defined for this cell.” Is that expected?
Step 2 (listed after step 3): in tegra234-p3737-pcie.dtsi, line 43, under the pcie_ep@141a0000 node, the instructions say “Add the pipe2uphy phandle entries as a phy property” and “pipe2uphy DT nodes are defined in SoC DT”. I can’t find those values in tegra234-soc-pcie.dtsi below line 417 under the pcie_c5_ep: pcie_ep@141a0000 node. It is not clear what the syntax for pipe2uphy is; is it referring to phys and phy-names? Also, is status = "disabled"; supposed to be changed (to "okay")?
Step 3 (listed after the first step 3): in tegra234-p3737-pcie.dtsi, line 43, under the pcie_ep@141a0000 node, “add the reset-gpios property with the gpio phandle, the gpio number connected to PERST# and flags (GPIO_ACTIVE_LOW)”. It is not clear what the syntax for reset-gpios is, or which gpio phandle and number to use.
Could these steps be given a little more clarity about the exact expected syntax, or an example patch like the ones given for PCIe x1 (C0) and PCIe x8 (C7) in RP mode, just for PCIe C5 EP (pcie_ep@141a0000)? My own best-guess sketch follows below.
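For concreteness, here is my best guess at the combined edits for the pcie_ep@141a0000 node, pieced together from the SoC DT labels and the upstream p3737 board DT; the p2u_nvhs_* phandle names, the GPIO bank/pin for PERST#, and the status flip are my assumptions, so corrections are welcome:

    pcie_ep@141a0000 {
            status = "okay";   /* assuming the SoC DT's "disabled" must be overridden */

            /* pipe2uphy: one phandle per NVHS lane used by C5 (x8);
               label names assumed from the SoC DT */
            phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
                   <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
                   <&p2u_nvhs_6>, <&p2u_nvhs_7>;
            phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3",
                        "p2u-4", "p2u-5", "p2u-6", "p2u-7";

            /* PERST# input: gpio phandle, pin, and GPIO_ACTIVE_LOW flag;
               bank/pin copied from the upstream p3737 DT */
            reset-gpios = <&tegra_main_gpio TEGRA234_MAIN_GPIO(AF, 1)
                           GPIO_ACTIVE_LOW>;
    };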
In the subsection Hardware Requirements for a Tegra PCIe Endpoint Mode:
Step 3: is the Orin crossover cable the same as the Xavier one in Jetson_AGX_Xavier_PCIe_Endpoint_Design_Guidelines.pdf, Figure 3?
In the subsection Enabling the PCIe Endpoint on a Jetson AGX Orin Devkit:
Step 1: in p3701.conf.common, keep configuration #1 and edit it to ODMDATA="gbe-uphy-config-22,hsstp-lane-map-3,nvhs-uphy-config-1,hsio-uphy-config-0,gbe0-enable-10g";
Step 2: flash Orin with sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1
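Put together, the devkit-side preparation reduces to two steps (p3701.conf.common sits at the top of the Linux_for_Tegra BSP tree in my install):

    # in Linux_for_Tegra/p3701.conf.common:
    ODMDATA="gbe-uphy-config-22,hsstp-lane-map-3,nvhs-uphy-config-1,hsio-uphy-config-0,gbe0-enable-10g";
    # then flash the devkit:
    sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1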
In the subsection Connecting and Configuring the Tegra PCIe Endpoint System:
Step 3: once booted, I am not sure whether mount -t configfs none /sys/kernel/config needs to be executed first (pci-endpoint-cfs.rst, line 20). Then edit /sys/kernel/config/pci_ep/functions/pci_epf_nv_test/func1/vendorid and deviceid, and if /sys/kernel/config/pci_ep/controllers/141a0000.pcie_ep/start shows up, write to it. Boot the RP system and proceed to the Testing PCIe Endpoint Support steps.
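For reference, the full sequence I am attempting, pieced together from pci-endpoint-cfs.rst (the vendor/device ID values are ones I picked as placeholders, not values from the Orin documentation):

    # on the EP (Orin), as root
    mount -t configfs none /sys/kernel/config     # skip if already mounted
    cd /sys/kernel/config/pci_ep
    mkdir -p functions/pci_epf_nv_test/func1
    echo 0x10de > functions/pci_epf_nv_test/func1/vendorid   # placeholder IDs
    echo 0x0001 > functions/pci_epf_nv_test/func1/deviceid
    ln -s functions/pci_epf_nv_test/func1 controllers/141a0000.pcie_ep/
    echo 1 > controllers/141a0000.pcie_ep/start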
Following the instructions, I was able to change p3701.conf.common, flash, change tegra_defconfig, rebuild the kernel, and execute all the steps up to and including busybox devmem 0x4307b8000 32 0xfa950000. However, the setpci -s 0005:01:00.0 COMMAND=0x02 step fails with Warning: No devices selected for “COMMAND=0x02”.
Is it predicated on correct PCIe cabling and the RP being booted, or something else? Is there any other diagnostic command to run on the Orin EP standalone?
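In case it helps, the standalone checks I can think of running myself (my guess is that the documented BDF may simply not match what enumerated here):

    # on the RP: list NVIDIA functions with domain-qualified BDFs
    lspci -D -d 10de:
    # then retry setpci with whatever BDF is actually listed
    # on the EP (Orin): check whether the controller reports link-up
    dmesg | grep -i pcie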
The instructions do not explicitly specify the C5 pinmux; is it already configured for the Dev Kit by default, or is there an extra step?
The crossover cable schematics are in Jetson_AGX_Xavier_PCIe_Endpoint_Design_Guidelines.pdf, with the note: “Note that the power rails of each connector have different net-names, so they are not connected.” Does the same schematic still apply to the Orin Dev Kit EP? If yes, does it mean that all pins below A11/B11 are disconnected, or that only the power pins are disconnected and some of pins A1-A10/B1-B10 are still connected (per the PCI Express pinout on Wikipedia)?
If this is an Orin devkit, then you only need to change the ODMDATA inside p3701.conf.common…
You don’t need to do anything else. For example, the pinmux change is not needed, and the tegra_defconfig change is also not needed.
I am actually not sure why you think you need to run so many extra steps to make it work…
So the pinmux is already set, thanks. The tegra_defconfig change CONFIG_STRICT_DEVMEM=n does seem to be needed, otherwise busybox devmem 0x4307b8000 fails even with sudo. I am not sure whether any extra setup/configuration steps are needed, but maybe there is a diagnostic step, since there is an error at the last step, setpci -s 0005:01:00.0 COMMAND=0x02? This command is supposed to allow the PCIe endpoint to respond to PCIe memory accesses from the root port system.
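To double-check that flag on the running kernel I used the following (this assumes CONFIG_IKCONFIG_PROC=y, which my Jetson kernel has enabled):

    # on the EP (Orin): confirm the devmem restriction in the running kernel
    zcat /proc/config.gz | grep STRICT_DEVMEM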
The cables (swap board) to connect the two devices follow Nvidia Jetson AGX Xavier PCIe Endpoint Design Guidelines (DA-09357), Figure 3, where there is a note about the power rails not being connected. Just to confirm: no pins below A11/B11 are connected, not even ground pins A4/B4?
To clarify, I meant the ground-to-ground pins A4<->B4, A18<->B18, A49<->B49.
The setpci -s 0005:01:00.0 COMMAND=0x02 step still fails with Warning: No devices selected for “COMMAND=0x02”. Any insight, or an additional logging/debug command to run around it?
setpci -s 0001:00:00.0 COMMAND=0x02 succeeds, but on the RP, busybox devmem reads from and writes to 0x70000000 have no effect: the read is always 0xffffffff, which doesn’t match the Orin value 0xfa950000 at 0x199a3a000, and the write doesn’t overwrite the value.
The current status is that I am able to get past the setpci step with a different Orin device name than the one in the documentation, and the RP can see a RAM memory device but is not able to read from or write to it remotely; the value is always 0xffffffff. The RP lspci output starts with “On RP:” above; there are no other Nvidia entries.
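One more data point I collected; my assumption is that constant 0xffffffff reads usually point at the link state, the BAR assignment, or the memory-enable bit:

    # on the RP: dump BAR assignment and the command register state
    sudo lspci -vv -s 0001:00:00.0
    # "Region 0: Memory at 70000000 ..." should match the devmem address,
    # and after setpci the "Control:" line should show Mem+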
No, I feel there is something wrong in this whole thread. Please just share the info I asked for first.
There is no need to explain it yourself; the log will tell us the situation.
The document under Hardware Requirements says “you can use any standard x86-64 PC that is running Linux”, and based on PCIe standardization and some other Xavier PCIe posts, that is to be expected. Unfortunately I don’t have another Orin to test with, but the RP PC works with other PCIe cards, including an Nvidia GPU, and can see the Orin as an EP device; it just can’t read from or write to it. Has Nvidia confirmed that the PCIe EP workflow works between a host PC and the Orin dev kit?
Based on the EP and RP logs, are there any issues that stand out, or any other logs we should collect?
Thank you for the confirmation. The Orin side was OK; I had missed the setpci device name on the RP side. devmem2 reads and writes now work remotely from both the RP and the EP side.
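For anyone hitting the same thing, this is the kind of check that now works on my setup (addresses are the ones from my earlier posts; devmem2 takes an address, an access type, and optional write data):

    # on the RP: read and write a word in the EP's BAR
    sudo devmem2 0x70000000 w
    sudo devmem2 0x70000000 w 0x12345678
    # on the EP (Orin): the same data is visible at the reserved region
    sudo busybox devmem 0x199a3a000 32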
I was trying to run a more realistic bandwidth/latency check with a large data transfer, and I can see there is a way using /sys/kernel/debug/pcie-x/ on the RP (where x is one of 0,1,2,3) and cat write. I don’t see that subfolder on the RP or on other PCs; is it only enabled by the CONFIG_PCIE_TEGRA_DW_DMA_TEST=y flag on Jetson, or can it be enabled on a regular Linux PC as well? The closest flags by name I see in the kernel .config are DMATEST=y and DMA_API_DEBUG=y.
I was also looking through the JetPack kernel source samples, but don’t see a direct standalone example with DMA and the conversion from virtual to physical/bus address space. There are
kernel/nvidia/drivers/misc/tegra-pcie-ep-mem.c: static int write(struct seq_file *s, void *data) and read(struct seq_file *s, void *data)
kernel/nvidia/drivers/pci/host/pcie-tegra-dw.c: static int write(struct seq_file *s, void *data) and read(struct seq_file *s, void *data)
but I don’t know how to populate s and data, or whether these can be used directly from a main() reader on the RP side and a main() writer on the EP side. Any suggestions on a good base source to implement EP writes and RP reads?
After rebuilding the kernel, booting, and running setpci, /sys/kernel/debug contains only
drwxr-xr-x 2 root root 0 Dec 31 1969 dma-api
drwxr-xr-x 2 root root 0 Dec 31 1969 dma_buf
and no pcie-x, so either some other .config flag is needed, or pcie-x is specific to the Jetson kernel.
Since debug/pcie-x may not be an option on the PC RP, could any of the tegra-pcie-ep-mem.c or pcie-tegra-dw.c write() or write_ll() and read() or read_ll() functions potentially be called from an executable main() on the RP and EP? If that is an option, what would be the correct code to populate s and data?
I am still trying to measure the PCIe EP-write / PC-RP-read DMA speed. So far it looks like an Orin RP can be set up for testing via /sys/kernel/debug/pcie-x/ and cat write, by enabling CONFIG_PCIE_TEGRA_DW_DMA_TEST=y plus some patches covered in “The bandwidth of virtual ethernet over PCIe between two Xaviers is low”. But that flag does not exist in the PC RP kernel; there are some PCIe DMA test flags, but they don’t enable /sys/kernel/debug/pcie-x on the RP, and /pci/dma does not show any NV devices. Based on “AGX Endpoint PCIe DMA speed test” it should be doable, but it is not clear what needs to be changed in which file, and how to build it on a PC.
Is there any guidance on how to modify kernel/nvidia/drivers/pci/dwc/pcie-tegra.c (which has #ifdef CONFIG_PCIE_TEGRA_DW_DMA_TEST) and build it, in order to eventually run a DMA write from virtual user space to a physical/bus address on the EP (Orin), and a DMA read from a physical address (the 0001:00:00.0 NV RAM memory device on the RP) into user space on the RP (PC), all driven from user space (main())? Or does it require writing a custom driver? Or could mmap or similar from user space be used instead?
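As a fallback while the DMA path is unclear, here is a minimal user-space sketch of the mmap idea above that I can build on the PC RP (my own experiment, not from the docs; the BDF and mapping size are from my setup, so substitute accordingly). Note this drives the transfer with the CPU through the BAR, so it is a functional check and a PIO-bandwidth number, not a true DMA benchmark:

    /* rp_bar_rw.c: map BAR0 of the EP function via sysfs and access it.
       Build: gcc -o rp_bar_rw rp_bar_rw.c ; run as root on the RP. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* BDF from my RP's lspci; substitute the one on your system */
        const char *res = "/sys/bus/pci/devices/0001:00:00.0/resource0";
        int fd = open(res, O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 4096;  /* map one page; real BAR size is in lspci -vv */
        void *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        volatile unsigned int *bar = map;
        printf("first word: 0x%08x\n", bar[0]);  /* expect the EP test pattern */
        bar[1] = 0xdeadbeef;                     /* should be visible on the EP */

        munmap(map, len);
        close(fd);
        return 0;
    }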