How to test a Xavier in endpoint mode with JetPack-4.2

I’m trying to use PCIe to communicate between two Xaviers. One is in the default mode (root complex); the other is configured in endpoint mode.

The RC Xavier runs JetPack-4.1.
The EP Xavier runs JetPack-4.2.

The EP Xavier is started up first, and I typed the following commands on it:

cd /sys/kernel/config/pci_ep/
mkdir functions/pci_epf_nv_test/func1
echo 0x104c > functions/pci_epf_nv_test/func1/vendorid
echo 0xb500 > functions/pci_epf_nv_test/func1/deviceid
echo 16 > functions/pci_epf_nv_test/func1/msi_interrupts
ln -s functions/pci_epf_nv_test/func1 controllers/141a0000.pcie_ep/
echo 1 > controllers/141a0000.pcie_ep/start

(These commands are from https://devtalk.nvidia.com/default/topic/1048723/use-tegra-pcie-ep-mem-c-on-root-complex-to-test-the-endpoint-device-failed-in-dma_read-/?offset=1)

The RC Xavier is started up after the EP Xavier. The EP Xavier can be found as a PCIe device:

Before trying to use “tegra-pcie-ep-mem.c” to test the endpoint device, I realized that the deviceid I gave the EP Xavier doesn’t match the one in tegra-pcie-ep-mem.c, so I changed the deviceid from 0xb500 to 0x1ad4 on the EP Xavier.

The problem is that “/sys/kernel/debug/tegra_pcie_ep” can’t be found in the system, which means ep_test_dma_probe() in tegra-pcie-ep-mem.c isn’t being called as expected. Is it correct that all the regions are disabled?

If I change the software of the EP Xavier back to JetPack-4.1, “/sys/kernel/debug/tegra_pcie_ep” is created correctly and “Kernel driver in use: tegra_ep_mem” is printed by the “lspci” command:

So why doesn’t it work in JetPack-4.2?

JetPack-4.2 is moving towards getting Xavier-EP ready for the “virtual Ethernet over PCIe” communication mechanism, hence the change in device ID.
Is there any specific reason you are switching to JetPack-4.2? If the purpose here is only to verify the Xavier-EP mode, then JetPack-4.1 should work just fine.

I tried JetPack-4.1 to test the endpoint before, but an issue occurred, which I have described in this post:
https://devtalk.nvidia.com/default/topic/1048723/use-tegra-pcie-ep-mem-c-on-root-complex-to-test-the-endpoint-device-failed-in-dma_read-/?offset=1
In that post, one of your team told me that I need to use JetPack-4.2 to test endpoint mode.
Please look into that old post for the details of the issue I encountered.
Thanks.

The commands given in the first comment in this thread are correct for L4T r32.1 (which is included in Jetpack 4.2).

On the root port system, lspci will show that all of the endpoint’s BAR regions are disabled, unless a driver runs on the root port system and explicitly enables the endpoint. This is normal PCIe behaviour.

I am not familiar with /sys/kernel/debug/tegra_pcie_ep or ep_test_dma_probe(); they are likely related to the PCIe endpoint support in the previous L4T release’s kernel, which as mentioned above has been replaced by a new much more flexible scheme in L4T r32.1.

There is a SW app note that describes how to test MMIO transactions over PCIe from user-space, but unfortunately I can’t find it right now. Here are some hints (a combined example is sketched after the two lists below):

To enable the PCIe endpoint to respond to memory transactions:

  • Use lspci to find the bus/dev/function of the PCIe device
  • Run the following command to enable the device using the location from lspci:
    setpci -s 0005:01:00.0 COMMAND=0x02

To test memory transactions to the endpoint BAR (assuming the endpoint is running the pci_epf_nv_test driver):

  • On the root port system, use lspci to find the PCIe bus address of the endpoint’s BARs. This matches the root port’s physical CPU address used to access those BARs.
  • On the endpoint, run dmesg and look for a log message from the pci_epf_nv_test driver indicating the physical address of the memory that backs the BAR.
  • Use “busybox devmem” to read/write the BARs on the root port, using the relevant physical address from the previous points.
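Putting those hints together, a rough end-to-end example might look like the following. This is only a sketch: the 0005:01:00.0 address comes from the setpci example above, the 0x1f40000000 BAR address is a made-up placeholder, and the endpoint-side address has to be taken from your own dmesg output.

On the root port:

lspci -v                                      # find the endpoint and note its BAR address, e.g. “Region 0: Memory at 1f40000000”
setpci -s 0005:01:00.0 COMMAND=0x02           # Memory Space Enable (bit 1), so the endpoint responds to accesses to its BARs
busybox devmem 0x1f40000000 32 0x12345678     # write a 32-bit value to the start of the BAR
busybox devmem 0x1f40000000 32                # read it back

On the endpoint:

dmesg                                         # look for the pci_epf_nv_test message with the physical address backing the BAR
busybox devmem <address from dmesg> 32        # should return the value written from the root port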

You can also enable the virtual-Ethernet-over-PCIe driver that’s part of L4T r32.1 by modifying the driver name that’s bound to the endpoint port in the command you quoted in the first comment in this thread. A host running L4T r32.1 contains the host driver that matches it, so the two should automatically bind together.

Thank you very much for the answer. Using “busybox devmem” to test the endpoint works.
Now I am planning to use the “virtual-Ethernet-over-PCIe” driver. As far as I know, pci-epf-tegra-vnet.c is the source code of the endpoint-side driver. My question is: where is the source code of the host driver for “virtual-Ethernet-over-PCIe”? And how do I use the driver on both sides?

Thanks.

In the same directory tree where ./drivers/pci/endpoint/functions/pci-epf-tegra-vnet.c exists, there is also ./drivers/net/ethernet/nvidia/pcie/tegra_vnet.c. That is the matching host driver.

The setup instructions on the endpoint are almost identical, except:
a) Replace the endpoint function name “pci_epf_nv_test” with “pci_epf_tvnet”.
b) Don’t set vendorid/deviceid since the pci_epf_tvnet function driver’s defaults match what the host driver expects.
c) I don’t think you need to set the MSI count.

So, that means on the endpoint:

cd /sys/kernel/config/pci_ep/
mkdir functions/pci_epf_tvnet/func1
ln -s functions/pci_epf_tvnet/func1 controllers/141a0000.pcie_ep/
echo 1 > controllers/141a0000.pcie_ep/start

On the host, simply make sure the tegra_vnet driver is loaded, and it should automatically bind to the PCIe EP device once the two systems are booted. “ifconfig -a” should show an Ethernet interface, which you can use just like any other Linux Ethernet interface.
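For a quick check on the host (just a sketch; the exact bus address and interface name will vary on your system):

lspci -k         # the endpoint’s entry should show “Kernel driver in use: tegra_vnet”
ifconfig -a      # a new Ethernet interface (e.g. eth1) should be listed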

Thanks. After I can see the device in ifconfig, how can I write application software so the two sides can communicate with each other? Is there any socket-like function to use?

Thanks.

This driver exposes a standard Linux network interface, so all you need to do is:
a) Configure the network interface just like any other Linux network interface.
b) Use the sockets API to make a connection between the two applications and transfer data.

OK, thanks.
Which parameter values do I need to give the socket() function to establish the socket connection over PCIe?

int socket(int domain, int type, int protocol);

You may use any network protocol and socket type that the Linux kernel supports and that is enabled when the kernel is compiled. The Ethernet-over-PCIe driver simply transports packets back and forth just like any other Ethernet adapter. Examples could be raw Ethernet, UDP, TCP, …

Perhaps the most common protocol you could use is TCP, which would correspond to:
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
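If netcat is installed, you can also do a quick TCP sanity check over the link before writing your own application (a sketch only; the 192.168.2.1 address and port 5000 are example values, use whatever addresses you assigned to the Ethernet-over-PCIe interfaces):

nc -l 5000                          # on one Jetson: listen on TCP port 5000
echo hello | nc 192.168.2.1 5000    # on the other Jetson: connect to the listener’s address and send a line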

Thanks. I get your point, but I have one more question here. How does it know whether the packets come from real Ethernet or Ethernet-over-PCIe? From my understanding, the same parameters are passed to socket() when we communicate over real Ethernet.

Thanks.

“How does it know whether the packets come from real Ethernet or Ethernet-over-PCIe?”

What is “it”; your application, the Linux kernel, the Ethernet driver? Either way, the answer is probably “it doesn’t matter”. The kernel routes packets (e.g. your application’s responses) in a standard manner using the routing table, irrespective of the type of Ethernet connection, and the application doesn’t need to know anything about this, except at most which IP address to connect to. The key is that you will configure the Jetson<->Jetson Ethernet-over-PCIe network interfaces with a specific subnet that’s distinct from any other Ethernet network that the Jetson is attached to.
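For example (a sketch; eth1 and 192.168.2.0/24 are arbitrary choices, just pick a subnet that doesn’t overlap with your other networks):

ip addr add 192.168.2.1/24 dev eth1    # on the root-port/host side
ip link set eth1 up
ip addr add 192.168.2.2/24 dev eth1    # on the endpoint side (interface name may differ)
ip link set eth1 up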

The following might help with more background details:

  • Book: “Unix Network Programming” by W. Richard Stevens
  • The Linux Network Administrator’s Guide (https://www.tldp.org/LDP/nag2/nag2.pdf) (there’s also a 3rd edition out, which might only be available as a printed book.)

Is there a way I can do these things in code instead of running these commands?
(I mean, start endpoint mode automatically in the kernel when the system starts up.)

Thanks.

I’m not aware of any way to make the kernel do this automatically without user-space requesting it. It’s quite easy to create a script to do the PCIe configuration at boot; see for example /etc/systemd/system/nv-l4t-usb-device-mode.service for how to create a systemd service that runs a script at boot.
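As an untested sketch along those lines, you could wrap the configfs commands from earlier in this thread in a script and run it from a oneshot systemd unit. The file names /usr/local/bin/pcie-ep-setup.sh and /etc/systemd/system/pcie-ep-setup.service are made up for illustration:

cat > /usr/local/bin/pcie-ep-setup.sh <<'EOF'
#!/bin/sh
# Configure and start the Xavier PCIe endpoint (virtual Ethernet function)
cd /sys/kernel/config/pci_ep/
mkdir -p functions/pci_epf_tvnet/func1
ln -s functions/pci_epf_tvnet/func1 controllers/141a0000.pcie_ep/
echo 1 > controllers/141a0000.pcie_ep/start
EOF
chmod +x /usr/local/bin/pcie-ep-setup.sh

cat > /etc/systemd/system/pcie-ep-setup.service <<'EOF'
[Unit]
Description=Configure Tegra PCIe endpoint mode at boot

[Service]
Type=oneshot
ExecStart=/usr/local/bin/pcie-ep-setup.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl enable pcie-ep-setup.service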

OK, thanks. Another question here.
In order to use this virtual PCIe Ethernet device, an IP address needs to be set. But I found that the IP can only be set after the host side establishes the PCIe link. My question is: how can I set the IP address automatically on the endpoint side after the PCIe link is established?

Thanks.

I think if you simply run “ifconfig” or “ip” commands after “echo 1 > controllers/141a0000.pcie_ep/start” then you should be able to set the IP on the endpoint.

Alternatively/additionally, take a look at /etc/network/interfaces, Network Manager, and systemd; they all have various ways to automatically configure network interfaces. These options should all work on either the root-port/host or the endpoint. You’ll have to choose the most appropriate option based on your use-case.
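For example, with the classic /etc/network/interfaces mechanism, a stanza along these lines would assign a static address (a sketch only; eth1 and the address are examples, the endpoint’s interface name may differ, and whether it is applied automatically when the interface appears depends on how your image handles network configuration):

allow-hotplug eth1
iface eth1 inet static
    address 192.168.2.2
    netmask 255.255.255.0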

I have a question about the vnet. After these operations, there is an Ethernet interface named eth1 on the host, but there is no new Ethernet interface on the EP. Which interface on the EP can be used to communicate with the host?

Hi 1271777143,

Please open a new topic with more details of your issue. Thanks.