We are having big issues getting a reliable response time for TCP communication between a server PC (Windows 10) and a Nano production module (with Auvidea JN30 carrier board, JetPack 4.3). In a test application we transmit a message of a certain size in a loop every 100ms to the Nano device. The Nano application immediately sends this message back to the server.
The round-trip time from message transmission to response reception is measured on the server PC:
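For reference, the measurement can be reproduced with a minimal sketch like the one below, assuming a plain TCP echo: one side sends a payload of a given size every 100ms and times the round trip, the other side echoes each message back. Port choice, iteration count, and function names are illustrative, not taken from our actual test application (here both ends run in one process over loopback just to show the logic).

```python
import socket
import statistics
import threading
import time

HOST = "127.0.0.1"  # assumption: loopback stand-in for the server<->Nano link

def echo_responder(listener, msg_size, iterations):
    """Stand-in for the Nano side: echo every received message back."""
    conn, _ = listener.accept()
    with conn:
        for _ in range(iterations):
            data = b""
            while len(data) < msg_size:          # read the full payload
                chunk = conn.recv(msg_size - len(data))
                if not chunk:
                    return
                data += chunk
            conn.sendall(data)                   # echo it straight back

def measure_rtt(msg_size, iterations=20, period_s=0.1):
    """Stand-in for the server side: send, wait for echo, record the RTT."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind((HOST, 0))                     # pick any free port
    listener.listen(1)
    port = listener.getsockname()[1]
    t = threading.Thread(target=echo_responder,
                         args=(listener, msg_size, iterations))
    t.start()

    rtts = []
    with socket.create_connection((HOST, port)) as s:
        payload = b"x" * msg_size
        for _ in range(iterations):
            start = time.perf_counter()
            s.sendall(payload)
            data = b""
            while len(data) < msg_size:          # wait for the full echo
                data += s.recv(msg_size - len(data))
            rtts.append((time.perf_counter() - start) * 1000.0)  # ms
            time.sleep(period_s)                 # one message every 100 ms
    t.join()
    listener.close()
    return min(rtts), max(rtts), statistics.median(rtts), statistics.mean(rtts)

if __name__ == "__main__":
    for size in (1, 32, 58, 1024):
        print("size:%d min: %.2fms max: %.2fms median: %.2fms mean: %.2fms"
              % ((size,) + measure_rtt(size)))
```

Over loopback all sizes come back in well under a millisecond; the anomaly below only appears on the real link to the Nano production module.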
#Response time for message sent to Jetson Ubuntu client
#message size is in bytes (payload)
size:1 min: 0.94ms max: 1.95ms median: 0.97ms mean: 0.97ms
size:2 min: 0.93ms max: 0.99ms median: 0.96ms mean: 0.82ms
size:4 min: 0.94ms max: 0.98ms median: 0.98ms mean: 0.97ms
size:8 min: 0.92ms max: 0.99ms median: 0.97ms mean: 0.87ms
size:16 min: 0.93ms max: 0.99ms median: 0.98ms mean: 0.97ms
size:32 min: 0.93ms max: 1.00ms median: 0.97ms mean: 0.97ms
size:64 min: 6.78ms max: 371.88ms median: 125.87ms mean: 149.18ms
size:128 min: 14.63ms max: 407.16ms median: 126.03ms mean: 152.62ms
size:256 min: 0.98ms max: 500.45ms median: 51.90ms mean: 112.44ms
size:512 min: 0.95ms max: 508.51ms median: 98.84ms mean: 138.57ms
size:1024 min: 0.92ms max: 560.86ms median: 115.71ms mean: 167.44ms
size:2048 min: 0.96ms max: 637.18ms median: 109.20ms mean: 198.76ms
size:54 min: 0.94ms max: 0.98ms median: 0.98ms mean: 0.97ms
size:55 min: 0.93ms max: 0.99ms median: 0.98ms mean: 0.97ms
size:56 min: 0.93ms max: 1.00ms median: 0.98ms mean: 0.98ms
size:57 min: 0.95ms max: 1.00ms median: 0.98ms mean: 0.97ms
size:58 min: 28.32ms max: 424.25ms median: 110.82ms mean: 146.71ms
size:59 min: 0.93ms max: 396.38ms median: 130.84ms mean: 151.46ms
So a message size of 58 bytes (and larger) causes an issue with the response times.
For testing we swapped out the Nano production device and used a different Ubuntu PC, and afterwards a Jetson Nano developer board, as receiver/responder for the messages. In these cases the large variation of response times does not occur, and the response time is usually around 1-2ms.
What can cause such behaviour? Is there a magical Ethernet setting in L4T that would solve our problem? Could it be hardware related?
Happy to hear about your suggestions :)