[Q] Drive PX2 AutoChauffeur 10GbE performance

Dear all,

I tested the 10GbE interface of the Drive PX2 AutoChauffeur, but the throughput did not reach what I expected.
For the test, I connected the 10GbE ports of two D.PX2 units with a CAT 7 cable and ran TCP/UDP tests with iperf3:

D.PX2 1:

% iperf3 -s

D.PX2 2:

% iperf3 -c 192.168.20.91 -t 100 -V

I got the following rather poor results:

  TCP: under 4 Gbps
  UDP: approx. 1 Gbps
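
One note on the UDP figure: iperf3 drives UDP at a fixed target bitrate (only 1 Mbit/s by default), so the UDP result reflects the -b value used rather than the link limit. To push UDP toward line rate, a client invocation along these lines should work (the 10G target is illustrative; the address is from the TCP test above):

% iperf3 -c 192.168.20.91 -u -b 10G -t 100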

Even after tuning some network parameters as described in linux-netperf.txt, performance did not improve.
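
For reference, the tuning in question is mostly the kernel socket-buffer limits; a sketch of the kind of parameters meant here, with illustrative values (the exact settings recommended in linux-netperf.txt may differ):

% sudo sysctl -w net.core.rmem_max=16777216
% sudo sysctl -w net.core.wmem_max=16777216
% sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
% sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"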

Any kind of advice would be appreciated.

Thanks

Dear ddpx2,

This is the Host-to-DPX2 10G test process and setup.
Would you like to try the following? Thanks.

  1. Flash the DUT with the DriveInstall 5.0.10.3 build
  2. Once it boots, do “sudo apt-get update” on Tegra-A
  3. Install iperf3 binary using command “sudo apt-get install iperf3”
  4. Set the MTU to jumbo size on both the Host and Tegra-A with the command below (a quick end-to-end check is sketched after this list):
    sudo ifconfig <interface> mtu 9000 txqueuelen 1000 up
  5. Collect required throughput numbers
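
A quick way to confirm that jumbo frames actually pass end to end is a ping with a large payload and the don't-fragment bit set; a sketch, where 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header (<peer address> is a placeholder):

% ping -M do -s 8972 <peer address>

If this fails while a plain ping works, the 9000-byte MTU did not take effect somewhere on the path.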

Setup:

  1. Host is configured as a DHCP server. Followed link: 5.x_Linux_DPX_SDK
  2. Ethernet cable connected from Host 10G port to DUT 10G Port
  3. Confirmed the 10G-Eth interface by issuing the command lspci (the negotiated link speed can be confirmed as sketched below)
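
It is also worth confirming that the link actually negotiated 10 Gb/s, since a 1 Gb/s negotiation would cap iperf3 at about 1 Gbps; a sketch with ethtool (<interface> is whatever the 10G port enumerates as):

% sudo ethtool <interface> | grep -E "Speed|Duplex"

A healthy link reports Speed: 10000Mb/s and Duplex: Full.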

Dear Steve,

It has taken a long time, but I have finally finished testing.

I connected two Drive PX2 units, each flashed with 5.0.10.3, and configured the MTU as you advised.
In short, TCP throughput increased from 4.0x Gbps to 7.4 Gbps.

An interesting issue occurs here with parallel sessions: when I use multiple sessions, the throughput drops sharply to around 5.x Gbps (a single-command form is sketched after the client command below).

Testing:
2 Drive PX2 AutoChauffeurs - 1:1 connection with CAT 7 FTP cable
SW: iperf3
server:

% iperf3 -s

client:

% iperf3 -c <server ip address> -t 100 -N
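
For reference, the multi-session case can also be reproduced in a single command with iperf3's -P flag, which opens several parallel streams (the stream count of 4 is arbitrary):

% iperf3 -c <server ip address> -t 100 -N -P 4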

Many thanks Steve,

Dear Steve,

We want to use the 10GbE interface to store data on a NAS server. I followed this topic: I installed 5.0.10.3, enabled jumbo frames on both sides, and edited /etc/sysctl.conf as the Data Logging topic says:

net.core.rmem_default = 1048576
net.core.rmem_max = 10485760
net.core.wmem_default = 1048576
net.core.wmem_max = 10485760
net.core.netdev_max_backlog = 30000
net.ipv4.ipfrag_high_thresh = 8388608
net.ipv6.conf.all.disable_ipv6 = 1
vm.dirty_background_ratio = 5
vm.dirty_ratio = 80
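
For these settings to take effect without a reboot, the file can be reloaded in place (assuming the edits live in /etc/sysctl.conf):

% sudo sysctl -p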

When I test the connection with iperf3, the throughput cannot exceed 5 Gbits/sec. How can I increase the throughput? Thank you

Dear Mert.colak,

Would you make sure you are using a CAT 6 or later cable?

Thanks,

Dear ddpx2,

Yes, I’m using a CAT 6 UTP cable. Also, when I run the performance test in reverse mode (with -R the server transmits and the client receives):

iperf3 -c <server ip address> -t 100 -N -R

The result is 6.6 Gbits/s.

Dear Mert.colak,

As I did not use a NAS as a peer, I cannot say exactly what is wrong.

When I tested the AutoChauffeur, I connected two AutoChauffeurs directly with a CAT 7 cable.

  • Would you test it again with a CAT 7 cable?
  • Are the two sides connected directly?
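  • One more knob that may be worth a try: a single iperf3 stream can be CPU-bound on one core, and iperf3's -A flag pins the client and server processes to specific cores. A sketch (core 2 locally and core 3 on the server; the numbers are arbitrary):

% iperf3 -c <server ip address> -t 100 -N -A 2,3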

Thanks,