The Ethernet interface over PCIe (endpoint mode) does not work

Hi hermes_wu,

Sharing the detailed steps for your reference.

  1. Prepare devices on EP and RP side:
    a) Edit the jetson-xavier.conf file and change ODMDATA to:
    b) Apply the patch and re-build the kernel Image
    c) Copy your rebuilt kernel Image to JetPack-5.1.2/Linux_for_Tegra/kernel/Image
    d) Full flash the image:
    $ sudo ./flash.sh jetson-xavier mmcblk0p1

    Use the default JetPack-5.1.2 ODMDATA to flash the image on the RP Xavier.

  2. Connect the two systems using the appropriate PCIe cable

  3. Boot the EP device

  4. Run the commands below on the EP side:

    # cd /sys/kernel/config/pci_ep/
    # mkdir functions/pci_epf_tvnet/func1
    # echo 16 > functions/pci_epf_tvnet/func1/msi_interrupts
    # ln -s functions/pci_epf_tvnet/func1 controllers/141a0000.pcie_ep/
    # echo 1 > controllers/141a0000.pcie_ep/start
  5. Boot the RP device

  6. After boot-up, use ifconfig to check the Ethernet interface (mtu=64512)

  7. We suggest using the desktop GUI to set a static IP on the Ethernet interface and then ping.

For details, please refer to: PCIe Endpoint Mode — Jetson Linux Developer Guide documentation
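The EP-side configfs commands from step 4 can be collected into one small script. This is only a sketch: the function name pci_epf_tvnet and controller 141a0000.pcie_ep are taken from the steps above and may differ on other chips.

```shell
#!/bin/sh
# Sketch: bring up the tvnet endpoint function via configfs (run as root on the EP).
set -e
cd /sys/kernel/config/pci_ep/
mkdir -p functions/pci_epf_tvnet/func1
echo 16 > functions/pci_epf_tvnet/func1/msi_interrupts   # number of MSI vectors
ln -sf functions/pci_epf_tvnet/func1 controllers/141a0000.pcie_ep/
echo 1 > controllers/141a0000.pcie_ep/start              # start the EP controller
```

Run it on the EP before booting the RP device, matching steps 4 and 5 above.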

Hi @carolyuu

Steps 1 to 6 are done.
The only difference is step 7: I disabled the NetworkManager service and installed Netplan to replace it, which was mentioned by WayneWWW before.

No need to disable that anymore. Just use the GUI to set the IP.

Hi @WayneWWW

May I know why we can’t use Netplan as before, if NetworkManager works?

Because we have already used Netplan for our demo scenario.

We didn’t test that, so we don’t know whether Netplan would be needed or not.

Hi @WayneWWW

The PCIe virtual network worked when I used NetworkManager to set up the static IP.
But the performance is not good, especially from the RC to the EP side.

iperf3 test on the EP side (PCIe Gen3 x4):


It is expected that performance is worse in the EP=server, RP=client test case.

Hi @WayneWWW

But no matter whether the traffic goes from EP to RC or from RC to EP, the transmission rate is far from the speed of PCIe Gen3 x4 (max: 31.504 Gbps).
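As a sanity check on that ceiling: the raw Gen3 x4 line rate follows from 8 GT/s per lane, 4 lanes, and 128b/130b encoding. That figure is before TLP/DLLP protocol overhead, so achievable payload throughput is lower still.

```shell
# 8 GT/s per lane x 4 lanes x 128/130 encoding efficiency ~= 31.5 Gbps raw
awk 'BEGIN { printf "Gen3 x4 raw: %.1f Gbps\n", 8 * 4 * 128 / 130 }'
```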

Yes, that is expected.


Can I interpret this result as a limitation of Xavier and Orin?

Yes. After you enable jetson_clocks, it should reach its full limit.

Hi @WayneWWW

Can you give me more information about this limitation? Because we need to explain it to our customers.


I just checked some historical test results. The PCIe-vnet performance on your side matches the expected results from our side.

If you want to check real DMA read/write performance, which is not based on vnet, it could be faster.

But in both cases, write performance is faster than read performance.

The main difference between read and write operations is non-posted vs. posted transactions:

i) Non-posted transactions are requests that return a Completion TLP from the completer, indicating that the transaction was successfully processed. Memory Reads are non-posted: for read requests, the completer returns a Completion with Data TLP. Memory reads use the split-completion model: the requester sends the read TLP, and after the completer fetches the data from memory, it sends the data back in the form of Completion with Data TLPs.
ii) Memory Writes are posted transactions. These do not require a Completion TLP from the packet’s ultimate destination.
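This is why sustained read bandwidth is bounded by how many non-posted requests can be in flight at once: roughly outstanding_requests × read_request_size / completion_round_trip_latency. A back-of-the-envelope sketch, where all three input values are illustrative assumptions, not measured Xavier numbers:

```shell
# All three inputs are hypothetical, for illustration only.
TAGS=32        # outstanding non-posted read requests in flight (assumed)
MRRS=256       # max read request size in bytes (assumed)
RTT_NS=1000    # completion round-trip latency in ns (assumed)
awk -v t="$TAGS" -v s="$MRRS" -v l="$RTT_NS" \
    'BEGIN { printf "sustained read ~ %.2f GB/s\n", t * s / l }'
```

Writes have no such round trip, so they can stream at close to the link rate.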

Hi @WayneWWW

If we want to check the real DMA read/write which is not based on Vnet, can we verify it on Xavier and Orin following the guide from the link below?

RP Mode DMA:

In the procedure below, x is the number of the root port controller whose DMA is being used for the perf checkout.

  1. Go to the debugfs directory of the root port controller

cd /sys/kernel/debug/pcie-x/

  2. Set the channel number (set it to one of 0, 1, 2, 3)

echo 1 > channel

  3. Set the size to 512 MB

echo 0x20000000 > size

  4. Set the source address for DMA.
    For this, grep for the string "---> Allocated memory for DMA" in the dmesg log and use whatever address comes up in the grep output

dmesg | grep " ---> Allocated memory for DMA"

example output would be something like

[ 7.102149] tegra-pcie-dw 141a0000.pcie: ---> Allocated memory for DMA @ 0xC0000000

So, use 0xC0000000 as the source address

echo 0xC0000000 > src

Note: don’t forget to replace 0xC0000000 with your own grep output value. In case it is not found in the grep output, save the full kernel boot log and search in it.

  5. Set the destination address for DMA.
    For this, execute the following command

lspci -vv | grep -i "region 0"

an example output would be something like

Region 0: Memory at 1f40000000 (32-bit, non-prefetchable) [size=512M]

So, use 0x1f40000000 as the destination address

echo 0x1f40000000 > dst

Note: don’t forget to replace 0x1f40000000 with the "Region 0" value from your own lspci output.

  6. Execute the write test

cat write

It prints the output in the following format (use ‘dmesg | tail’ to get the output):

tegra-pcie-dw 14100000.pcie_c1_rp: DMA write. Size: 536870912 bytes, Time diff: 316519776 ns

  7. The read test can be performed by interchanging ‘src’ and ‘dst’ and executing

cat read

EP Mode DMA:
Most of the steps are performed on the RP Xavier, except for extracting information from the EP Xavier.

Hi @WayneWWW

For this case, which function needs to be launched on the PCIe-EP side?


I can’t see the same nodes on our Xavier/Orin DevKit. Can you give me more information so that we can perform the PCIe DMA test?

Screenshot from 2023-09-01 10-17-55

Hi @WayneWWW

I found a topic below:

and followed the patch to modify JP5.1.2, but CONFIG_PCIE_TEGRA_DW_DMA_TEST=y never takes effect in kernel_out/.config.

Screenshot from 2023-09-01 15-55-07
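One common reason a hand-added CONFIG_…=y disappears from the generated .config is an unmet Kconfig dependency: `olddefconfig` silently drops symbols whose `depends on` conditions are false. A generic kbuild check (a sketch only; the kernel source and kernel_out paths are assumptions based on the usual L4T layout):

```shell
# Enable the symbol, regenerate the config, then see whether it survived.
cd Linux_for_Tegra/sources/kernel/kernel-5.10      # path assumed
./scripts/config --file ../kernel_out/.config --enable PCIE_TEGRA_DW_DMA_TEST
make O=../kernel_out olddefconfig
grep PCIE_TEGRA_DW_DMA_TEST ../kernel_out/.config \
    || echo "symbol dropped: check its 'depends on' line in the defining Kconfig"
```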


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.