Hello Everyone!
I hooked up a Dahua 4MP IP camera to my Jetson Nano through a PoE switch and there is some serious lag on the stream; what’s worse is that the lag keeps increasing over time. At first I thought it was a memory resource issue, but I checked the memory (using the jtop command) and more than a GB of memory stays free when I play the rtsp stream (in the browser). I have tried everything under the sun - the ‘sudo jetson_clocks’ command, low frame rate, low resolution - but the lag is still there. Does anyone know what the problem could be? I know there is nothing wrong with the camera, since there is no lag when I play the rtsp stream on my laptop at high settings. Your help would be most appreciated.
Hi,
Do you run gstreamer or jetson_multimedia_api? Also, please share the release version ($ head -1 /etc/nv_tegra_release).
R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t210ref, EABI: aarch64, DATE: Fri Oct 16 19:44:43 UTC 2020
I am currently running the rtsp stream on a browser (Chromium).
Hi,
Hardware decoding is not enabled in Chromium, so the performance is dominated by CPU capability. Please run sudo nvpmodel -m 0 and sudo jetson_clocks. This runs the CPUs at max clock and may bring some improvement.
We would suggest using gstreamer or jetson_multimedia_api to leverage hardware decoding.
When I run the deepstream-imagedata-multistream application, I get the same lag as in the browser.
Example of low-latency playback with gst-launch-1.0 to play an RTSP video stream with hardware acceleration (please replace the RTSP address with yours):
gst-launch-1.0 rtspsrc location=rtsp://169.254.160.104:554/av0_0 latency=0 drop-on-latency=true max-size-buffers=0 ! decodebin ! nvoverlaysink sync=false -e
A similar low-latency gstreamer pipeline can be used with OpenCV, for example:
import cv2
camera = cv2.VideoCapture("rtspsrc location=rtsp://169.254.160.104/av0_1 latency=0 drop-on-latency=true max-size-buffers=0 ! decodebin ! nvvidconv ! video/x-raw, format=I420 ! appsink sync=0", cv2.CAP_GSTREAMER)
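For completeness, here is a minimal sketch of a full read/display loop built on the same pipeline (my own illustration, not tested on your camera; the RTSP address is a placeholder, and the I420-to-BGR conversion step may differ between OpenCV builds):
import cv2

# Same low-latency pipeline as above; replace the RTSP address with yours.
pipeline = ("rtspsrc location=rtsp://169.254.160.104/av0_1 latency=0 drop-on-latency=true max-size-buffers=0 "
            "! decodebin ! nvvidconv ! video/x-raw, format=I420 ! appsink sync=0")
camera = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not camera.isOpened():
    raise RuntimeError("Could not open the stream; check the RTSP address and that OpenCV was built with GStreamer")
while True:
    ok, frame = camera.read()
    if not ok:
        break
    # With I420 caps the frame usually arrives as a single-channel buffer, so convert to BGR before displaying.
    bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
    cv2.imshow("rtsp", bgr)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
camera.release()
cv2.destroyAllWindows()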
If video playback is laggy, you can use jtop to check whether NVDEC (hardware video decoding) is being used. If it is not, then something is wrong. As far as I know, all browsers use software video decoding, so you cannot use them if you want low latency, especially at high resolution.
Alternatively, you can use mpv with hardware video decoding enabled (you can either install prebuilt deb files or build it yourself). The version of mpv in Ubuntu 18.04 does not support the low-latency profile, so I had to use the following options to achieve the best latency (within the 0.1-0.2 s range, depending on the IP camera and stream resolution):
mpv --no-cache --audio-buffer=0 --vd-lavc-threads=1 --cache-pause=no --no-audio --demuxer-lavf-probe-info=no --demuxer-lavf-analyzeduration=0.0 --video-sync=audio --interpolation=no --keep-open-pause=no --untimed --rtsp-transport=tcp rtsp://192.168.1.10/1
I am getting the following output on the terminal when I run the gstreamer pipeline you suggested. The output gets stuck on “Waiting for EOS…”.
“gst-launch-1.0 rtspsrc location=rtsp://admin:abc12345@192.168.1.108:554 latency=0 drop-on-latency=true max-size-buffers=0 ! decodebin ! nvoverlaysink sync=false -e”
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:abc12345@192.168.1.108:554
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
WARNING: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0: Delayed linking failed.
Additional debug info:
./grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0:
failed delayed linking some pad of GstDecodeBin named decodebin0 to some pad of GstNvOverlaySink-nvoverlaysink named nvoverlaysink-nvoverlaysink0
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1:
streaming stopped, reason not-linked (-1)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS…
An error like this has never happened for me, so I’m not sure what’s wrong; the pipeline I suggested works for me with many different models of IP cameras with H264 and HEVC video streams.
Have you tested the RTSP address in some other application (besides the browser) to make sure it is correct? Not all cameras have a default video stream; most IP cameras I have require a full RTSP path ending with something like /1 or /av0_0, or some more complicated string to select a video stream. Even if a partial RTSP address seems to work in some application, I suggest using the full RTSP address with gst-launch-1.0. Usually IP cameras have multiple video streams, at least 2-3, with different addresses and resolutions.
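If you are unsure which paths your camera serves, a quick sketch like the following can probe a few candidates (the IP, credentials and path suffixes here are only examples - the Dahua-style path in particular is an assumption on my part, so check the camera’s manual or web UI for the exact stream URLs):
import cv2

base = "rtsp://admin:abc12345@192.168.1.108:554"
# Example path suffixes only; the exact ones depend on the camera model.
candidates = ["", "/1", "/av0_0", "/cam/realmonitor?channel=1&subtype=0"]
for suffix in candidates:
    url = base + suffix
    pipeline = ("rtspsrc location=" + url + " latency=0 drop-on-latency=true max-size-buffers=0 "
                "! decodebin ! nvvidconv ! video/x-raw, format=I420 ! appsink sync=0")
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    print(url, "->", "opened" if cap.isOpened() else "failed")
    cap.release()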
If nothing changes even with the full RTSP address, I suggest googling “ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1: Internal data stream error.”. Since I cannot reproduce the issue, I cannot try suggestions specific to the error.
Hi,
You can try uridecodebin:
$ gst-launch-1.0 uridecodebin uri='rtsp://admin:abc12345@192.168.1.108:554' ! nvoverlaysink sync=0
If you can launch it successfully with uridecodebin, it should work fine in the DeepStream SDK.
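As a quick sanity check from Python (a minimal sketch, assuming your OpenCV build has GStreamer support; please replace the RTSP address with yours), the same uridecodebin source can also be opened through OpenCV before moving to the DeepStream app:
import cv2

pipeline = ("uridecodebin uri=rtsp://admin:abc12345@192.168.1.108:554 "
            "! nvvidconv ! video/x-raw, format=I420 ! appsink sync=0")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("opened:", cap.isOpened())  # True means the pipeline could be opened
cap.release()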