Request an Application Demo of PCIe communication

Hi,

Following the documents “JETSON_AGX_Xavier_PCIE_ENDPOINT_SOFTWARE_FOR_L4T.pdf” and “JETSON_AGX_Xavier_PCIE_ENDPOINT_Design_Guidelines.pdf”, I set up two Xavier devices to communicate over PCIe, and it was successful.

Now I would like to ask: is there an application demo on Linux that can operate the PCIe device for communication?

Also, is it possible to use both the “pci_epf_nv_test” and “pci_epf_tvnet” modes?

BR,

You can try emulating virtual Ethernet over the PCIe connection.
Please refer to the documentation @ https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/xavier_PCIe_endpoint_mode.html#wwpID0ETHA
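
At a high level, the flow on that page is: configure and start the endpoint function on one Xavier, then enumerate it from the root-port side and bring up the virtual Ethernet interface. A rough sketch of the root-port side (the interface name and address here are illustrative and may differ on your setup):

# Root-port side, after the endpoint has been configured and started:
echo 1 > /sys/bus/pci/rescan     # enumerate the endpoint device
lspci                            # the Xavier endpoint should now be listed
ifconfig eth1 192.168.66.6 up    # assign an address to the virtual interface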

Hi vidyas,

Thanks for your reply!
Yes, we’ve already done that.

We found the transmission to be too slow and prone to disconnecting. We used the scp command to transfer a large file (1.5 GB), but found that the transfer rate was only about 70 MB/s. I would like to ask for your advice:
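
For reference, the transfer command looked like this (file path and user name are illustrative):

# Copy a 1.5 GB file from the root-port side to the endpoint
# over the PCIe virtual Ethernet link:
scp /path/to/large_file.bin nvidia@192.168.66.7:/tmp/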

  1. Is this transmission rate normal?
  2. How can we increase the speed?

Thanks!

Instead of scp, could you please try iperf and give us the numbers? You should get around 1 Gbps with the default release and around 5 Gbps with the attached patch set: T2T_VETH_PATCH.zip (18.6 KB)
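
For example, a basic TCP run between the two boards looks like this (a sketch; use the addresses you assigned to the virtual interfaces):

# On one board (server):
iperf3 -s
# On the other board (client), a 30-second TCP test:
iperf3 -c <server-ip> -t 30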

Hi vidyas,

Thanks for your reply!
We will test with iperf and give feedback as soon as possible.

BR,

Hi vidyas,
We are using the JetPack 4.4 kernel code (4.9.140-tegra). Is the attached patch set suitable for this release? Our source seems newer than the patch’s code; see the dry-run check sketched after the excerpts below.

Your patch:

--- a/drivers/net/ethernet/nvidia/pcie/tegra_vnet.c
+++ b/drivers/net/ethernet/nvidia/pcie/tegra_vnet.c
@@ -24,6 +24,8 @@
 /* Network link timeout 5 sec */
 #define LINK_TIMEOUT 5000
 
+#define TVNET_NAPI_WEIGHT	64
+
 #define RING_COUNT 256

Our code in tegra_vnet.h:

/* Network link timeout 5 sec */
#define LINK_TIMEOUT 5000

#define TVNET_DEFAULT_MTU 64512
#define TVNET_MIN_MTU 68
#define TVNET_MAX_MTU TVNET_DEFAULT_MTU

#define TVNET_NAPI_WEIGHT	64

#define RING_COUNT 256

/* Allocate 100% extra desc to handle the drift between empty & full buffer */
#define DMA_DESC_COUNT (2 * RING_COUNT)
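
Since TVNET_NAPI_WEIGHT is already present in our tree, one way we can check is a dry run of the patch against the kernel source (paths and the patch file name are illustrative; the actual names come from the zip):

# In the kernel source tree:
cd /path/to/kernel/kernel-4.9
patch -p1 --dry-run < /path/to/T2T_VETH_PATCH/tvnet.patch
# or, in a git checkout:
git apply --check /path/to/T2T_VETH_PATCH/tvnet.patch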

BR,

Hi vidyas,

Our configuration is:

cd /sys/kernel/config/pci_ep/
# Create an instance of the tvnet endpoint function named "ethpcie"
mkdir functions/pci_epf_tvnet/ethpcie
# Allocate 16 MSI interrupts to the function
echo 16 > functions/pci_epf_tvnet/ethpcie/msi_interrupts
# Bind the function instance to the endpoint controller
ln -s functions/pci_epf_tvnet/ethpcie controllers/141a0000.pcie_ep/
# Start the endpoint controller
echo 1 > controllers/141a0000.pcie_ep/start

We used “ethpcie” as the function name instead of “func1”. Is that okay?

Thanks,

BR,

Yes. That should be fine.
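
If you want a quick sanity check that the function instance is bound, listing the controller directory should show the symlink you created (paths as in your commands):

# The symlink created by "ln -s" should be visible here:
ls -l /sys/kernel/config/pci_ep/controllers/141a0000.pcie_ep/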

Hi vidyas,

Thanks very much!

We tested with iperf3:
endpoint: 192.168.66.7
command: sudo iperf3 -s -i 1 -p 50000

root port: 192.168.66.6
command: iperf3 -c 192.168.66.7 -t 5 -b 1G -P 1 -p 50000 -R

The test results are shown in the attached screenshot.

And the second test:
endpoint: 192.168.66.7
command: sudo iperf3 -s -i 1 -p 50000

root port: 192.168.66.6
command: iperf3 -c 192.168.66.7 -t 5 -b 5G -P 1 -p 50000 -R

The result is in the attached screenshot.

Hi,

May I ask whether the test results above are normal?

In addition, with TCP a server and client can transfer large files (more than 1 GB) successfully, but over UDP the transfer fails and gets stuck. How can we solve this problem?

Thanks,
BR,

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Your TCP perf numbers look fine.
Regarding the UDP part, no system hang is expected.
Could you please try the iperf command line below?
iperf3 -c 192.168.1.1 -u -b 5g -l 65507
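
In that client command, -u selects UDP, -b 5g caps the offered rate, and -l 65507 uses the maximum UDP payload size. On the other board, start the matching server the same way as in your TCP tests (a sketch; the client command above assumes the server is at 192.168.1.1):

# Server side:
iperf3 -s -i 1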