I’m currently tuning a pair of GStreamer pipelines on a Jetson TX2 that sends and receives audio and video. I’m trying to track down the cause of video artifacts on the stream coming from the TX2, where a camera that natively generates an H.264 stream does the capturing and the TX2 only payloads it and sends it out over RTP. Performance on this transmission stream is acceptable until the video receive pipeline is brought online; at that point the transmitted stream’s video quality drops precipitously.
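For reference, the transmit side looks roughly like the sketch below. The device path, caps, host, and port are placeholders rather than my exact values:

```python
#!/usr/bin/env python3
# Rough sketch of the TX2 transmit pipeline (PyGObject). Device path, caps,
# host, and port are placeholders, not the real configuration.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

send = Gst.parse_launch(
    "v4l2src device=/dev/video0 "             # camera natively produces H.264
    "! video/x-h264,width=1280,height=720,framerate=30/1 "
    "! h264parse "
    "! rtph264pay config-interval=1 pt=96 "   # TX2 only payloads the stream
    "! udpsink host=192.168.1.100 port=5000"
)
send.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```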
I’ve set up a dynamic GStreamer pipeline on the receiving desktop that logs debug statements and uses an rtpjitterbuffer with latency=0 to monitor stats on the stream the TX2 is transmitting.
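In simplified form, that receive/monitoring side is something like the following. The port, caps, and decoder are placeholders; the packet counts come from polling the rtpjitterbuffer's "stats" property:

```python
#!/usr/bin/env python3
# Simplified sketch of the receive-side monitoring pipeline (PyGObject).
# Port, caps, and decoder are placeholders, not the exact elements in use.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

recv = Gst.parse_launch(
    'udpsrc port=5000 caps="application/x-rtp,media=video,'
    'encoding-name=H264,payload=96,clock-rate=90000" '
    "! rtpjitterbuffer name=jbuf latency=0 "
    "! rtph264depay ! h264parse ! avdec_h264 ! autovideosink"
)
jbuf = recv.get_by_name("jbuf")

def log_stats():
    # rtpjitterbuffer exposes a "stats" GstStructure with num-pushed / num-lost
    stats = jbuf.get_property("stats")
    print("pushed={} lost={}".format(stats.get_value("num-pushed"),
                                     stats.get_value("num-lost")))
    return True  # keep the periodic GLib timeout alive

recv.set_state(Gst.State.PLAYING)
GLib.timeout_add_seconds(5, log_stats)
GLib.MainLoop().run()
```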
Conventional GStreamer wisdom is that adding a queue and/or an rtpjitterbuffer to the receiving desktop’s pipeline should ameliorate some of the issues. Adding and removing queues and jitterbuffers with various settings results in an average of about 100 packets lost per 100k packets pushed, over 12 different tests. Needless to say, video quality doesn’t improve, since without any queues or jitterbuffers the average is 95 packets lost per 100k.
Now, in defiance of conventional wisdom and documentation, I’ve also tested placing queues and jitterbuffers in the TX2’s transmission pipeline and discovered a dramatic improvement. Packet loss on the receiving side is roughly cut in half by the simple addition of a queue (52 per 100k), and when the queue is combined with a jitterbuffer (even with latency set to 0), packet loss drops to 0 per 100k.
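Concretely, the change on the TX2 amounts to inserting those two elements just before the udpsink, along the lines of the sketch below (same placeholder values as the transmit sketch above):

```python
#!/usr/bin/env python3
# Transmit sketch again, now with a queue and an rtpjitterbuffer (latency=0)
# inserted before the udpsink; all other values are the same placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

send = Gst.parse_launch(
    "v4l2src device=/dev/video0 "
    "! video/x-h264,width=1280,height=720,framerate=30/1 "
    "! h264parse "
    "! rtph264pay config-interval=1 pt=96 "
    "! queue "                      # decouples payloading from the sink's thread
    "! rtpjitterbuffer latency=0 "  # re-times outgoing RTP buffers
    "! udpsink host=192.168.1.100 port=5000"
)
send.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```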
The question is: why? While it’s great to have a solution that completely mitigates the issue I’m seeing, I also need a plausible explanation to bolster support for it. Does anybody have ideas, or perhaps places to look on the TX2 side for evidence?
(also posted on stack overflow)