GStreamer pipeline starts dropping frames after some time of processing (DeepStream 5.0)

Setup details:
• Hardware Platform (Jetson Xavier NX Production Module)
• DeepStream SDK 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version: L4T Driver package (32.4.3)
• Issue Type: Question

We have created a custom DeepStream application that builds a pipeline which consumes live RTSP streams and processes each frame through one primary detector and three classifiers (secondary inference). After running for 15 to 20 minutes the pipeline slows down and takes 5 to 6 seconds to process a single frame, whereas initially it processes 5-6 frames per second.

We are feeding 11 RTSP sources as input; some of them run at 10 FPS and some at 15 FPS, with resolutions of 2304x1296 or 1920x1080. The muxer properties used are as follows:
{
"UDP_SINK_PORT": 5403,
"RTSP_OUT_PORT": 5886,
"PGIE_INTERVAL": 0,
"MUXER_WIDTH": 2304,
"MUXER_HEIGHT": 1296
}
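
For context, this is roughly how those values are applied to the nvstreammux element in our Python pipeline (a minimal sketch modelled on the deepstream-imagedata-multistream sample; the batched-push-timeout value and variable names here are illustrative, not our exact code):

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

muxer_cfg = {"MUXER_WIDTH": 2304, "MUXER_HEIGHT": 1296}

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
if not streammux:
    sys.stderr.write("Unable to create nvstreammux\n")
    sys.exit(1)

# Scale/pad every source to a common resolution before batching.
streammux.set_property("width", muxer_cfg["MUXER_WIDTH"])
streammux.set_property("height", muxer_cfg["MUXER_HEIGHT"])
streammux.set_property("batch-size", 11)               # one slot per RTSP source
streammux.set_property("batched-push-timeout", 40000)  # microseconds (illustrative value)
streammux.set_property("live-source", 1)               # inputs are live RTSP feeds
streammux.set_property("enable-padding", 1)            # keep aspect ratio via padding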
We have set live-source to 1 on the muxer and padding is enabled. The primary-infer (nvinfer) configuration is as follows:
[property]
net-scale-factor=0.0039215697906911373
model-engine-file= ./Model/resnet18_int8_tlt7.engine
labelfile-path=./Model/resnet18_peoplenet_label.txt
#maintain-aspect-ratio=1
workspace-size=1000
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

We have analyzed the time difference between the ntp_timestamp of a frame and the time at which it reaches the callback function at the end of the pipeline (the attach-sys-ts flag is not set). The results are as follows (a sketch of the measurement probe follows the timings):
after 300 frames: 200 ms
after 1300 frames: 252 ms
after 2300 frames: 258 ms
after 3300 frames: 3 min 13 s 150 ms
after 3600 frames: 4 s 132 ms
after 4300 frames: 501 ms
The callback function itself takes anywhere from under 1 ms to 3 ms. Can you please suggest what changes should be made to avoid the frame dropping? We cannot increase the interval on the primary detector, as that would not fulfil our purpose.

If any other info is needed, let me know.

Thanks.

Can you monitor the GPU load while running this case? You can use the command "nvidia-smi dmon".

When the frame-processing lag starts, the GPU utilization (observed with jtop/tegrastats, since "nvidia-smi dmon" is not available on Jetson) drops from a constant 99% to 80-90% for a moment and then falls to 0-5%.
RAM usage was between 5.9 GB and 6.5 GB out of 7.9 GB total.

Are you using deepstream-app? Have you set 'rtsp-reconnect-interval-sec' in your config file? Have you monitored the status of the RTSP streams; are the UDP packets being received smoothly during your testing?
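
For reference, 'rtsp-reconnect-interval-sec' belongs to the [sourceN] group of the deepstream-app reference configuration (not to streammux); a minimal sketch, with the URI and values as placeholders only:

[source0]
enable=1
# type 4 = RTSP source in the deepstream-app reference config
type=4
uri=rtsp://<camera-address>/stream
# seconds without data from the source before the app forces a reconnect
rtsp-reconnect-interval-sec=10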

Yes, we used the deepstream-imagedata-multistream sample app as a reference to create our application.
I have not set 'rtsp-reconnect-interval-sec' in the config yet; should it be added to the streammux config?
I also consumed the same RTSP streams over the internet through a second copy of the same DeepStream app (with a single RTSP source enabled) running at the same time; that instance did not drop frames even after running for hours.

Hi Fiona,
Any clue? Let me know if any other detail is needed. :)

No clue. The RTSP stream may not be smooth at times, which will affect the FPS calculation result.

Gentle Reminder!

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

We cannot determine the reason from the description alone. Are there steps to reproduce the problem? Have you tried the same case with stable sources such as USB cameras?