• Hardware Platform: Jetson NX
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.0
Currently, I am trying to connect two cameras to one edge device (Jetson NX). Camera resolution is 1080p at 15 FPS.
Both cameras' fields of view point at the same object (the environment is not very complex; at most about 10 objects are inside the FOV).
The problem we see: after DeepStream runs for 5-8 hours without a system reboot, one camera shows around 5 s of delay/latency relative to the other camera.
This is weird. On the local display the video streams look smooth most of the time, so I am confused about where this delay is coming from.
I am using the Python bindings for development, and all software versions are up to date.
Here follows the pipeline with settings:
uridecodebin (RTSP video source) [same as the SDK example; I did NOT set uri_decode_bin.set_property("buffer-duration", 1) or uri_decode_bin.set_property("buffer-size", 1). Could that induce delay?] ->
streammux ["live-source" set to 1, "batched-push-timeout" set to 4000000] ->
pgie [here we use ResNet-10, the default pre-trained model provided by NVIDIA] ->
filter [video/x-raw(memory:NVMM), format=RGBA] ->
nvvideoconvert ->
tee -> queue1 -> RTSP output
    -> queue2 -> video file output
    -> queue3 -> local display -> nvegltransform -> nveglglessink [I did **NOT** set the sink's "sync" property to false. Could that induce delay?]
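In case it helps to see it concretely, here is a minimal sketch of the two property changes I am asking about. The property names ("sync", "buffer-size", "buffer-duration") are the standard GStreamer/DeepStream ones; the element names and the helper/dict names below are illustrative assumptions, not from my actual code.

```python
# Hedged sketch: properties that commonly reduce display-side latency in a
# live pipeline. Property names are standard GStreamer/DeepStream ones; the
# element names (auto-generated "nveglglessink0" etc.) and helper are
# illustrative only -- substitute the names used in your own pipeline.
LOW_LATENCY_PROPS = {
    # With sync=True (the default), the sink waits for each buffer's
    # timestamp before rendering; clock drift on a long-running RTSP source
    # can then accumulate as visible latency. sync=False renders buffers as
    # soon as they arrive.
    "nveglglessink0": {"sync": False},
    # Keep decoder-side buffering minimal for a live source.
    "uridecodebin0": {"buffer-size": 1, "buffer-duration": 1},
}

def apply_low_latency_props(pipeline):
    """Apply the properties above to a Gst.Pipeline, skipping absent elements."""
    for elem_name, props in LOW_LATENCY_PROPS.items():
        elem = pipeline.get_by_name(elem_name)  # Gst.Bin.get_by_name()
        if elem is None:
            continue  # element not in this pipeline; nothing to set
        for prop, value in props.items():
            elem.set_property(prop, value)
```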
Also, are there any logs where I might find the root cause?
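For reference, one thing I plan to try is GStreamer's built-in latency tracer, which logs per-buffer pipeline latency to stderr regardless of the application language (the script name below is a placeholder for my app):

```shell
# Enable GStreamer's built-in latency tracer before launching the app.
export GST_TRACERS="latency"
export GST_DEBUG="GST_TRACER:7"
# Then run the application and capture the trace (placeholder script name):
#   python3 deepstream_two_camera_app.py 2> latency.log
```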
Thanks a lot for your help.