Setup details:
• Hardware Platform (Jetson Xavier NX Production Module)
• DeepStream SDK 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version: L4T Driver package (32.4.3)
• Issue Type: Question
We have created a custom DeepStream application that builds a pipeline consuming RTSP live streams and processing each frame through 1 primary detector and 3 classifiers (secondary infer). After running for 15 to 20 minutes it slows down and takes 5 to 6 seconds to process a single frame, while initially it processes 5-6 frames per second.
We are feeding 11 RTSP sources as input; some of them are at 10 FPS and some at 15 FPS, with resolutions of 2304x1296 / 1920x1080. The muxer properties used are as follows:
{
  "UDP_SINK_PORT": 5403,
  "RTSP_OUT_PORT": 5886,
  "PGIE_INTERVAL": 0,
  "MUXER_WIDTH": 2304,
  "MUXER_HEIGHT": 1296
}
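For reference, this is roughly how those values map onto the nvstreammux element (a minimal sketch in Python; the element/variable names and the batched-push-timeout value are illustrative, the property names are the standard nvstreammux ones):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvstreammux batches frames from the 11 RTSP sources into one buffer per batch
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 2304)        # MUXER_WIDTH
streammux.set_property("height", 1296)       # MUXER_HEIGHT
streammux.set_property("batch-size", 11)     # one slot per RTSP source
streammux.set_property("live-source", 1)     # sources are live streams
streammux.set_property("enable-padding", 1)  # letterbox instead of stretching
# batched-push-timeout bounds how long the muxer waits to fill a batch
streammux.set_property("batched-push-timeout", 40000)  # microseconds, illustrative value
```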
As shown in the sketch above, we have set live-source to 1 in the muxer and padding is enabled. The primary infer configuration is as follows:
[property]
net-scale-factor=0.0039215697906911373
model-engine-file=./Model/resnet18_int8_tlt7.engine
labelfile-path=./Model/resnet18_peoplenet_label.txt
#maintain-aspect-ratio=1
workspace-size=1000
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
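This config is loaded into the nvinfer element via its config-file-path property (a short sketch; the config file name is illustrative):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst  # assumes Gst.init(None) was already called

# nvinfer parses the [property] group above from the config file
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "./Model/pgie_config.txt")  # illustrative name
```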
We have analyzed the time difference between the ntp_timestamp of a frame and the time it reaches the callback function at the end of the pipeline (without the attach-sys-ts flag set). The results are as follows (a sketch of the measurement probe is included after the numbers):
after 300 frames: 200 ms
after 1300 frames: 252 ms
after 2300 frames: 258 ms
after 3300 frames: 3 min 13 sec 150 ms
after 3600 frames: 4 sec 132 ms
after 4300 frames: 501 ms
The callback function itself takes from <1 ms to 3 ms. Can you please suggest what changes should be made to avoid frame dropping? We cannot increase the interval in the primary detector as that would not fulfil our purpose.
If any other info is needed, let me know.
Thanks.