• Hardware Platform: Jetson Xavier NX Production Module
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version: L4T Driver package (32.4.3)
• Issue Type: Question
We have created a custom DeepStream application whose pipeline consumes live RTSP streams and processes each frame through 1 primary detector and 3 classifiers (secondary infer). After running for 15 to 20 minutes it slows down and takes 5 to 6 seconds to process a single frame, while initially it processes 5-6 frames per second. The rough pipeline topology is sketched below.
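For reference, the element arrangement is roughly the following (reconstructed from the description above; the depay/decoder elements are illustrative, assuming H.264 streams):

```
11x [rtspsrc -> rtph264depay -> nvv4l2decoder] -> nvstreammux
    -> nvinfer (primary detector)
    -> nvinfer (classifier 1) -> nvinfer (classifier 2) -> nvinfer (classifier 3)
    -> downstream elements -> pad probe callback (end of pipeline)
```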
We are feeding 11 RTSP sources as input; some are at 10 FPS and some at 15 FPS, with resolutions of 2304x1296 / 1920x1080. In the muxer we have set live-source to 1 and enabled padding. The muxer and primary infer setup is sketched below.
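A minimal sketch of how these elements are configured in our application (the batch-size matches the 11 sources; the muxer output resolution, batched-push-timeout, and the nvinfer config file path are illustrative assumptions, not our exact values):

```c
#include <gst/gst.h>

/* Sketch: configure nvstreammux and the primary nvinfer as described above.
 * batch-size follows our 11 sources; resolution, batched-push-timeout and
 * the config file path are illustrative assumptions. */
static void
configure_mux_and_pgie (GstElement **mux_out, GstElement **pgie_out)
{
  GstElement *streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
  g_object_set (G_OBJECT (streammux),
      "live-source", 1,               /* live RTSP inputs */
      "enable-padding", 1,            /* pad to preserve aspect ratio */
      "batch-size", 11,               /* one slot per RTSP source */
      "batched-push-timeout", 40000,  /* microseconds; illustrative */
      "width", 1920, "height", 1080,  /* illustrative output resolution */
      NULL);

  GstElement *pgie = gst_element_factory_make ("nvinfer", "primary-infer");
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "config_infer_primary.txt", /* hypothetical path */
      NULL);

  *mux_out = streammux;
  *pgie_out = pgie;
}
```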
We have analyzed the time difference between the ntp_timestamp of a frame and the time at which it reaches the callback function at the end of the pipeline (without the attach-sys-ts flag set). The results are as follows (the measurement probe is sketched after the list):
after 300 frames
after 1300 frames
after 2300 frames
after 3300 frames: 3 min 13 sec 150 ms
after 3600 frames
after 4300 frames
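The measurement is done with a pad probe like the following (a minimal sketch of our approach; the probe is attached on the sink pad of the last element, and the function name is illustrative):

```c
#include <gst/gst.h>
#include <sys/time.h>
#include "gstnvdsmeta.h"

/* Pad probe at the end of the pipeline: for every frame in the batch,
 * print the difference between the current wall-clock time and the
 * frame's ntp_timestamp (both in nanoseconds since the epoch). */
static GstPadProbeReturn
latency_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  struct timeval now;
  gettimeofday (&now, NULL);
  guint64 now_ns = (guint64) now.tv_sec * 1000000000ULL +
                   (guint64) now.tv_usec * 1000ULL;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    if (frame_meta->ntp_timestamp > 0 && now_ns > frame_meta->ntp_timestamp) {
      g_print ("source %u frame %d delay %.1f ms\n",
          frame_meta->source_id, frame_meta->frame_num,
          (double) (now_ns - frame_meta->ntp_timestamp) / 1e6);
    }
  }
  return GST_PAD_PROBE_OK;
}
```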
The callback function itself takes from <1 ms to 3 ms per frame. Can you please suggest what changes should be made to avoid frame dropping? We cannot increase the frame interval (the nvinfer interval property) in the primary detector, as that would not fulfill our purpose.
If any other info is needed, let me know.