Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bug; lag/decoding issues when using a live RTSP camera as a source
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I am working on a traffic-signal project where I use RTSP streams from security cameras installed at traffic signals to detect objects (cars).
I am running a DeepStream Python app for this (code from GitHub attached):
When I run the program on recorded videos, it works fine. But when running it on a live stream, the video output stutters and lags.
In the following screen recording, I have opened the Jetson in NoMachine on my laptop so that you can see the video stream running perfectly on my laptop (using the SmartPSS software) while the same stream is buffering and lagging in the Jetson window. 2022-07-29 11-18-43.mkv (16.4 MB)
The same thing happens when running only 1 or 2 streams.
Is there a network issue? Please capture packets on the Jetson to check.
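A minimal way to capture those packets is with tcpdump; this is only a sketch, and the interface name and camera IP below are assumptions you should replace with your own values:

```shell
# Assumptions: eth0 is the Jetson's network interface and
# 192.168.1.100 is the camera's IP address; replace both.
IFACE=eth0
CAMERA_IP=192.168.1.100
# Capture only camera traffic into a pcap file for offline analysis
# (open it in Wireshark and look for RTP sequence gaps or retransmissions).
CAPTURE_CMD="sudo tcpdump -i $IFACE host $CAMERA_IP -w rtsp_capture.pcap"
echo "$CAPTURE_CMD"
```

Run the printed command on the Jetson while the lag is reproducing, then stop it with Ctrl-C and inspect the capture.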
If the network is OK, we need to check for a decoding issue. You can use this command to record the RTSP source to H.264: gst-launch-1.0 uridecodebin uri=rtsp://xxx ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h264enc bitrate=1000000 ! filesink location=/home/nvidia/bp.264
In your "2022-07-29 11-18-43.mkv", how do you see those three videos? I did not see the "python3" caption; is it a third-party tool? If yes, it should be a network issue.
You can start four command lines (one per stream) to verify:
gst-launch-1.0 uridecodebin uri=rtsp://xxx1 ! nvvideoconvert ! autovideosink
gst-launch-1.0 uridecodebin uri=rtsp://xxx2 ! nvvideoconvert ! autovideosink
If there is no network issue, we need to check for a performance issue, because there are HEVC decoding, H.264 encoding, and inference all running. Please monitor the CPU/GPU usage during the test. You can run four HEVC videos to verify; you can use /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4
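On a Jetson, CPU/GPU usage can be watched with tegrastats while the four-stream test runs; a sketch (the log file name and the four-stream invocation are assumptions, the sample path is the bundled clip mentioned above):

```shell
# tegrastats ships with JetPack; the interval is in milliseconds.
SAMPLE=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4
MONITOR_CMD="sudo tegrastats --interval 1000 --logfile tegrastats.log"
# Hypothetical four-stream run of the same app, using the bundled HEVC clip.
TEST_CMD="python3 deepstream_imagedata-multistream.py file://$SAMPLE file://$SAMPLE file://$SAMPLE file://$SAMPLE frames"
echo "$MONITOR_CMD"
echo "$TEST_CMD"
```

Start the monitor in one terminal, the app in another; if GPU or CPU load pegs near 100% only with four streams, it is a performance issue rather than a network one.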
Do you mean running 4 local HEVC videos is OK? Like this: python3 deepstream_imagedata-multistream.py file:///hevc1 file:///hevc2 file:///hevc3 file:///hevc4 frames. If so, it should not be a performance issue.
Does it show mosaic at start time if you run 4 RTSP HEVC videos? I suspect it is a network issue or a stream issue.
Can you see video if you run a single gst-launch-1.0 uridecodebin uri=rtsp://xxx1 ! nvvideoconvert ! autovideosink? You can use "ulimit -n 2048" to fix the "Too many open files" problem.
It happens when DeepStream has to interact over the local network.
I used the rtspout.py example to generate a stream. When played on the Jetson itself, the stream works fine. But when playing the same stream on another computer on the network, it doesn't play well.
Could you elaborate on "It happens when DeepStream has to interact over local network"?
About "I used rtspout.py example to generate a stream. The stream when played on jetson, works fine.": how did you play it? About "when playing same stream on another computer on the network, it doesn't play well.": what is the difference between the networks?
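For a like-for-like comparison, both machines can play the stream with the same GStreamer one-liner. A sketch: the port and mount point here follow the usual rtspout.py defaults (check the URL the script actually prints at startup), and the Jetson IP is a placeholder:

```shell
JETSON_IP=192.168.1.50   # hypothetical: replace with your Jetson's address
# Assumption: rtspout.py serves at port 8554 with mount point /ds-test;
# use whatever URL the script prints when it starts.
PLAY_CMD="gst-launch-1.0 uridecodebin uri=rtsp://$JETSON_IP:8554/ds-test ! videoconvert ! autovideosink"
echo "$PLAY_CMD"
```

videoconvert is used here (rather than nvvideoconvert) so the same line also works on the non-Jetson computer; if the stream plays cleanly on the Jetson with this command but not on the other machine, the problem is between the two hosts.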
About "when I run deepstream on RTSP stream generated from jetson itself, it produces much less mosaic": did you run four RTSP streams?
From points 2 and 3, it should be a source-receiving issue, because the only difference is the video source. Please capture packets on the Jetson to check. You can use GStreamer to check receiving 4 sources at the same time; here is the command: gst-launch-1.0 -e rtspsrc location=rtsp://ip/url ! rtph265depay ! h265parse ! mp4mux ! filesink location=file.mp4 (the -e flag makes gst-launch send EOS on Ctrl-C so mp4mux can finalize a playable file).
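To exercise all four sources at once, that single-recorder pipeline can be stamped out per camera; a sketch with placeholder URLs (the -e flag is needed so mp4mux finalizes each file on Ctrl-C):

```shell
# Hypothetical camera URLs: replace with your four RTSP streams.
URLS="rtsp://cam1/url rtsp://cam2/url rtsp://cam3/url rtsp://cam4/url"
i=0
for url in $URLS; do
  i=$((i+1))
  # -e sends EOS on Ctrl-C so mp4mux writes a playable file.
  CMD="gst-launch-1.0 -e rtspsrc location=$url ! rtph265depay ! h265parse ! mp4mux ! filesink location=file$i.mp4"
  echo "$CMD"
done
```

Run each printed command in its own terminal so the four recorders receive simultaneously; if the resulting files show the same corruption, it points at the sources or the network rather than at DeepStream.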