Note that the volume above is the mounted location of the configurations needed to run (i.e. the attached DeepStream config as well as the infer config).
The issue I am facing is that the RTSP output sink is slow to start: it takes ~10 seconds to establish a connection to the RTSP output on rtsp://localhost:8557/. However, once it starts I get real-time FPS. The problem is that the client reading this video stream (VLCj) must start/stop the RTSP stream frequently. If the client uses the source RTSP directly, the video feed comes up instantly after a start/stop, but the RTSP stream generated by deepstream-app takes ~10 seconds to start, which is not acceptable because the client needs the video to start instantly.
To test the RTSP sink output as I have, open the RTSP stream generated above in VLC and start/stop it multiple times to see the delay.
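An equivalent command-line check (a sketch; fakesink stands in for a real decode/display branch, and the URL mirrors the one above) is to time how long this takes to reach PLAYING:

```shell
# Time how long it takes for the pipeline to start receiving data
gst-launch-1.0 rtspsrc location=rtsp://localhost:8557/ ! fakesink
```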
Things that I have tried:
Used gst-launch on a client connected to the Jetson via Ethernet. It still takes ~10 seconds to start.
Turned off OSD and PGIE in the DeepStream config and piped out only the raw RTSP video, without inference or on-screen display. It still takes ~10 seconds to start.
Adjusted select-rtp-protocol from 4 (TCP only) to 0 (UDP, multicast, and TCP), and it doesn't make a difference.
Reflashed the Jetson Orin with the full JetPack development packages and ran everything outside of a Docker container. It still takes ~10 seconds to start. I would also like to note that we have 3 Jetson Orin 64 Dev Kits in total, and we are seeing this delay on all 3.
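For context, the relevant source-group fragment looks roughly like this (a sketch, not my full config; the camera URI is a placeholder, and the property comments follow the deepstream-app config reference):

```
[source0]
enable=1
# 4 = RTSP source
type=4
uri=rtsp://<camera-ip>/stream
# 0 = UDP + UDP-multicast + TCP; 4 = TCP only
select-rtp-protocol=4
```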
It's as if there is some sort of handshake that must happen before RTSP streaming to the client begins, and it's taking far too long. I have been trying to solve this problem for days, changing every parameter I can think of in the DeepStream config file above, without any luck. I'm starting to wonder if there are code changes I must make to the reference deepstream-app to make it output RTSP with zero startup latency. Any ideas?
Yes, I am actually creating a custom DeepStream app based on the reference app and apps-common. Given that I am basing the app on /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-app, where exactly would I change this?
I set iframeinterval to 1 and then 5, and the RTSP stream is still taking ~10 seconds to start on the client.
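This is the kind of sink-group change I mean (a sketch, assuming the standard type=4 RTSPStreaming sink group from the deepstream-app config reference; bitrate and udp-port values are placeholders, the RTSP port matches the URL above):

```
[sink1]
enable=1
# 4 = RTSPStreaming
type=4
# 0 = H.264
codec=0
bitrate=4000000
# Tried both 1 and 5 here; no effect on startup delay
iframeinterval=1
rtsp-port=8557
udp-port=5400
```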
I set export GST_DEBUG=5 on the client running gst-launch, and I am noticing a message saying that UDP failed to connect after 5 seconds, that it is retrying with TCP, and to check your firewall to make sure RTP/RTSP is allowed…
With that said, I allowed all possible RTP/RTSP ports on the client, but I am still seeing that message pop up. I have tried gst-launch on a Windows 10 machine on the local network as well as on the other Jetson Orins, and I see the same result. Any ideas?
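The 5-second figure in that message lines up with rtspsrc's timeout property, which is the time (in microseconds, default 5000000) it waits on UDP before retrying over TCP. As a sketch of a client-side workaround, the fallback wait can be shortened:

```shell
# Fall back from UDP to TCP after 1 second instead of the default 5
# (timeout is in microseconds; URL mirrors the stream above)
gst-launch-1.0 rtspsrc location=rtsp://localhost:8557/ timeout=1000000 ! fakesink
```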
As you suggested, I added protocols=4 and the stream starts instantly instead of after the ~5-10 second wait. It appears that gst-launch attempts to read the RTSP source over UDP first. I am assuming this is the same issue when reading the RTSP stream generated by DeepStream? If you look at my DeepStream config file above, you will see that I have already added select-rtp-protocol=4 in [source0], so it should be reading the RTSP source's RTP packets with TCP only. Is this the same as adding protocols=4 to the gst-launch command?
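For completeness, the client-side command that now starts instantly looks like this (a sketch; protocols=4 restricts rtspsrc to TCP from the outset, so the 5-second UDP attempt is skipped entirely, and fakesink again stands in for the real decode/display branch):

```shell
# protocols=4 (TCP only) avoids the UDP attempt and its timeout
gst-launch-1.0 rtspsrc location=rtsp://localhost:8557/ protocols=4 ! fakesink
```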
Secondly, when DeepStream outputs an RTSP sink, is it streaming over TCP or UDP?