But apart from that, we must have a different understanding of what “unacceptable latency” means.
I was able to run your two pipelines on my Jetson Nano, and there is definitely a higher latency with the nvstreamdemux pipeline. Not a factor of 5, but at least 3 times higher.
I think that if we don’t agree on the fact that nvstreamdemux introduces additional, non-negligible latency, we won’t make any progress in this matter, and this solution is ruled out for me.
I’m pushing an NY video from my PC to the RTSP server. The two inference pipelines (the one without nvstreamdemux, which has the lower latency, and the other one) pull the video from the RTSP server and push the inference results back as another stream.
Then, on my PC, I pull both streams, the original and the annotated one, and display them side by side.
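For reference, the test setup looks roughly like this; server address, port, and file names are placeholders, not my exact values:

# push the source video from the PC to the RTSP server
ffmpeg -re -stream_loop -1 -i ny_video.mp4 -c copy -f rtsp rtsp://&lt;server-ip&gt;:8554/input

# pull the original and the annotated stream on the PC, side by side in two windows
ffplay rtsp://&lt;server-ip&gt;:8554/input &
ffplay rtsp://&lt;server-ip&gt;:8554/inference &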
I have now enabled display-clock for nvdsosd. With both pipelines I see a general latency of 1 s, which would be OK. One second means: I have a real-time clock next to me while my camera image goes through the inference, and the timestamp in the inference output video is 1 second behind. Good so far and completely acceptable.
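Just to make explicit where that timestamp comes from, here is a sketch of roughly where nvdsosd with display-clock sits in the pipeline without nvstreamdemux (not my exact pipeline; element parameters and the inference config are placeholders):

gst-launch-1.0 rtspsrc location=rtsp://&lt;server-ip&gt;:8554/input latency=100 ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=&lt;infer-config&gt;.txt ! \
  nvvideoconvert ! nvdsosd display-clock=1 ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! \
  rtspclientsink location=rtsp://&lt;server-ip&gt;:8554/inference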
When I, for instance, raise my hand at (real-time) second 35 of a minute, the inference display shows that movement one second later with exactly the timestamp 35. But only if nvstreamdemux is not in the pipeline. So the timestamped video really shows the real-time situation at that moment. Perfect.
But with nvstreamdemux in the pipeline, the movement is shown after a total delay of 3 seconds, and the timestamp in the video for the movement is 38. Note: the difference between the timestamp seen in the video and the real-time clock is still 1 second; it is just that the video, although correctly timestamped, is in reality 2 seconds older than without the demuxer.
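To be clear about what “with nvstreamdemux in the pipeline” means: the only difference is the section between nvinfer and nvdsosd, roughly like this (again only a sketch, not my exact pipeline):

... ! nvinfer config-file-path=&lt;infer-config&gt;.txt ! nvstreamdemux name=demux \
  demux.src_0 ! nvvideoconvert ! nvdsosd display-clock=1 ! ...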
Conclusion: the demuxer somehow needs an additional 2 seconds to feed the video to nvdsosd, where it gets timestamped correctly, but by then it has been artificially aged. :)
And there is no configurable way around it. This component is useless for real-time applications.
Due to the weekend and the time difference, we may be slow to reply. Based on the pipeline you provided, we are trying to reproduce this issue on our side and analyze it further.
On my side, with the pipeline you provided, the latency is about 1 s both with and without the nvstreamdemux plugin.
(screenshots) left: with nvstreamdemux, right: without nvstreamdemux
I was using ffplay for consumption, but even with a gst-launch pipeline I get the same result: 2 seconds. That’s nothing new. I also don’t believe that VLC would improve on that.
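For completeness, these are roughly the consumers I tried; the exact flags are from memory and shouldn’t matter for a 2-second gap:

# ffplay with buffering reduced
ffplay -fflags nobuffer -flags low_delay rtsp://&lt;server-ip&gt;:8554/inference

# plain GStreamer consumer
gst-launch-1.0 rtspsrc location=rtsp://&lt;server-ip&gt;:8554/inference latency=0 ! \
  rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false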
STEP 2: I will now replace the RTSP server with the one you suggested, but I fear that will not change anything. I’m sure your DS installation differs from mine in some unknown way.
Yes, as anticipated: replacing the RTSP server with an older version of it, and with the Docker version, didn’t change the situation. I just tested the problematic pipeline: 2 seconds behind.
Created an NGC account, pulled the Docker image you pointed me to, installed the additional drivers via /opt/nvidia/deepstream/deepstream/user_additional_install.sh, and re-compiled GStreamer for the RTSP EOS issue via “update_rtpmanager.sh” in /opt/nvidia/deepstream/deepstream/.
Then I ran the pipeline with nvstreamdemux at the Docker prompt, roughly with the sequence below.
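The command sequence was roughly this (the image tag is the one you named, shown as a placeholder here):

# on the Jetson
docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/deepstream-l4t:&lt;tag&gt;

# inside the container
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
/opt/nvidia/deepstream/deepstream/update_rtpmanager.sh
# then the same nvstreamdemux pipeline as before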
And all I got was what I already had: 2 seconds of latency.
In any case, there is definitely a difference between running with and without nvstreamdemux. But I suppose you are not going to see or accept that, so I’m getting used to the thought of giving up on this way of sharing a GPU…