Hello everyone,
I wanted to ask if anyone has made any speed/latency measurements over PCIe and Ethernet?
Is there any recommended way of measuring transfer speed?
And perhaps most importantly: do calculations based on the standard PCIe Gen 3 transfer rate of 8 GT/s per lane (roughly 8 Gbit/s raw per lane) apply here?
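For a rough sanity check, the Gen3 numbers work out like this (a sketch only: it assumes a Gen3 x4 link and 128b/130b line coding, and ignores TLP/DLLP protocol overhead, so real payload throughput will be lower):

```python
# Theoretical PCIe Gen3 throughput ceiling (assumptions: 8 GT/s per lane,
# 128b/130b line coding, x4 link; protocol overhead not accounted for).
GT_PER_S = 8e9          # raw transfers per second per lane
ENCODING = 128 / 130    # 128b/130b coding efficiency
LANES = 4               # x4 link

bits_per_s = GT_PER_S * ENCODING * LANES
print(f"Gen3 x{LANES} coding-level ceiling: {bits_per_s / 1e9:.2f} Gbit/s "
      f"({bits_per_s / 8e9:.2f} GB/s)")
```

So the practical upper bound for an x4 Gen3 link is around 31.5 Gbit/s (~3.94 GB/s) before protocol overhead, not 4 x 8 = 32 Gbit/s of usable payload.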
Thanks in advance for your reply
Dear @erick.vieyra,
We are checking internally on this. Do you have any use case that can be shared?
Thanks for your reply @SivaRamaKrishnaNV !
We do not have a specific use case at the moment. Quite the opposite: we are trying to evaluate the capabilities of Xavier-to-Xavier communication over PCIe in order to decide how to architect our system.
Thanks for checking this internally; we are eagerly awaiting more insights!
Dear @erick.vieyra,
We measured Ethernet (10G) performance, which is hosted on PCIe.
As I understand it, the use case is to check Xavier-to-Xavier communication performance.
We measured 10G (via the HSD port) inter-Tegra communication. To get the numbers on your side, could you try the steps below?
- Set MTU 9000 on both Xavier-A and Xavier-B with the following command:
sudo ifconfig <enP*p1s0> mtu 9000 txqueuelen 1000 up
- Run "ifconfig <enP*p1s0>" to verify that the MTU is set.
- Run iperf3 as follows:
On Tegra-B: iperf3 -s
On Tegra-A: iperf3 -c [Tegra-B ip] -l 512k -t 120
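As a side note, iperf3 can also emit a JSON report with the -J flag, which is easier to post-process than the text output. A minimal sketch of extracting the end-of-run summary (the embedded JSON is a trimmed sample in iperf3's report format, with placeholder numbers):

```python
import json

# Trimmed sample of an iperf3 JSON report (`iperf3 -c <ip> -J`);
# only the fields used below are included, values are placeholders.
report = json.loads("""
{
  "end": {
    "sum_sent":     {"bytes": 27380416512, "bits_per_second": 1.82e9, "retransmits": 36},
    "sum_received": {"bytes": 27380416512, "bits_per_second": 1.82e9}
  }
}
""")

sent = report["end"]["sum_sent"]
recv = report["end"]["sum_received"]
print(f"sender:   {sent['bits_per_second'] / 1e9:.2f} Gbit/s, "
      f"{sent['retransmits']} retransmits")
print(f"receiver: {recv['bits_per_second'] / 1e9:.2f} Gbit/s")
```

This makes it straightforward to script several runs (e.g. sweeping -l buffer sizes) and compare averages.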
Thanks for your reply @SivaRamaKrishnaNV
Those instructions are definitely useful, but we are trying to get close to the theoretical limit of a PCIe x4-lane connection.
We followed the steps in Non-Transparent Bridging and PCIe Interface Communication and are trying to get higher speeds with that configuration. The hardware path we are using is marked as B in the attached image.
Hello @SivaRamaKrishnaNV
I tried the new settings and command on the PCIe NTB connection, and the results show lower performance than before:
... more results before...
[ 4] 102.00-103.00 sec 215 MBytes 1.81 Gbits/sec 0 2.62 MBytes
[ 4] 103.00-104.01 sec 225 MBytes 1.87 Gbits/sec 0 2.62 MBytes
[ 4] 104.01-105.01 sec 223 MBytes 1.87 Gbits/sec 36 1.83 MBytes
[ 4] 105.01-106.01 sec 220 MBytes 1.85 Gbits/sec 0 1.83 MBytes
[ 4] 106.01-107.01 sec 220 MBytes 1.85 Gbits/sec 0 1.83 MBytes
[ 4] 107.01-108.00 sec 220 MBytes 1.86 Gbits/sec 0 1.83 MBytes
[ 4] 108.00-109.02 sec 225 MBytes 1.86 Gbits/sec 0 1.83 MBytes
[ 4] 109.02-110.00 sec 220 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 110.00-111.01 sec 225 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 111.01-112.00 sec 220 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 112.00-113.01 sec 225 MBytes 1.88 Gbits/sec 0 1.83 MBytes
[ 4] 113.01-114.01 sec 225 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 114.01-115.00 sec 220 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 115.00-116.02 sec 225 MBytes 1.86 Gbits/sec 0 1.83 MBytes
[ 4] 116.02-117.01 sec 220 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 117.01-118.02 sec 225 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 118.02-119.00 sec 220 MBytes 1.87 Gbits/sec 0 1.83 MBytes
[ 4] 119.00-120.01 sec 225 MBytes 1.87 Gbits/sec 0 1.83 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-120.01 sec 25.5 GBytes 1.82 Gbits/sec 36 sender
[ 4] 0.00-120.01 sec 25.5 GBytes 1.82 Gbits/sec receiver
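For context, the measured average can be compared against the theoretical ceiling (an assumption here: that the NTB link actually trained at Gen3 x4 with 128b/130b coding, which `lspci -vv` would confirm):

```python
# Compare the measured iperf3 average with the theoretical PCIe ceiling
# (assumed Gen3 x4 link, 128b/130b coding; protocol overhead ignored).
measured_gbps = 1.82                  # iperf3 sender/receiver average above
ceiling_gbps = 8 * (128 / 130) * 4    # ~31.51 Gbit/s for Gen3 x4

utilization = measured_gbps / ceiling_gbps
print(f"link utilization: {utilization:.1%}")
```

At roughly 6% of the link's coding-level ceiling, the bottleneck is almost certainly in the software path (e.g. the virtual-Ethernet-over-NTB stack and per-packet CPU cost) rather than the PCIe link itself.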