rel_28.2.1
The sender TX2 is generating an H.264 stream using:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! mpegtsmux alignment=7 ! udpsink host=x.y.z.a port=n sync=false ttl-mc=6
A receiver TX2i is displaying the stream using:
gst-launch-1.0 udpsrc port=n address=x.y.z.a ! tsdemux ! queue max-size-time=0 max-size-bytes=0 max-size-buffers=0 ! h264parse ! omxh264dec ! nvoverlaysink sync=false
The multicast (IGMP) group address x.y.z.a and port n are masked.
Latency is measured with a Raspberry Pi and two light sensors: one watches a flashing block in the source video and the other watches the same block on the receiver's display monitor. This gives a reasonably accurate measurement every second.
If the receiver is started before the sender, the latency is low and stable at about 160 milliseconds. If the sender is started before the receiver, the latency starts at about 280 milliseconds and slowly decreases to about 160 milliseconds over roughly a minute.
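If it helps narrow down where the extra ~120 ms sits after a late receiver start, per-buffer latency on the receiver could presumably be traced with something like this (untested; assumes the tracer framework in the GStreamer 1.8.x build shipped with L4T 28.2, same masking as above):

GST_TRACERS="latency" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 udpsrc port=n address=x.y.z.a ! tsdemux ! queue max-size-time=0 max-size-bytes=0 max-size-buffers=0 ! h264parse ! omxh264dec ! nvoverlaysink sync=false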
I have tried setting iframeinterval=10 and insert-sps-pps=true on the encoder, with no effect.
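For reference, that sender pipeline was along these lines (assuming iframeinterval and insert-sps-pps are set directly on omxh264enc, masking as above):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=UYVY, width=1920, height=1080, framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh264enc iframeinterval=10 insert-sps-pps=true ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! mpegtsmux alignment=7 ! udpsink host=x.y.z.a port=n sync=false ttl-mc=6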
Changing the queue parameters, and even removing the queue entirely, has no effect.
Clock settings are at their defaults.
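That is, nothing like the following has been run on either board, so both should be at their boot-default clocks (script path assumed from the stock L4T 28.2 image):

sudo /home/nvidia/jetson_clocks.sh --show
sudo /home/nvidia/jetson_clocks.sh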
This post is probably relevant: https://devtalk.nvidia.com/default/topic/1032771/jetson-tx2/no-encoder-perfomance-improvement-before-after-jetson_clocks-sh/post/5255605/#5255605