High RAM consumption in DeepStream 6.0.1

Hello everyone
I’m currently using a Jetson Nano Developer Kit with DeepStream 6.0.1. My pipeline is laid out below: the main chain runs from the source through nvstreammux, the PGIE and SGIE nvinfer instances, nvtracker, nvdsanalytics and nvdsosd to a tee, and the tee feeds two branches, one that encodes the output and streams it over RTP and one that ends in a fakesink.

Src —> nvstreammux —> mux_queue —> nvinfer(pgie) —> queue —> nvinfer(sgie) —> nvtracker —> queue —> nvdsanalytics —> queue —> nvvideoconvert —> queue —> nvdsosd —> nvvideoconvert —> Tee

queue —> capsfilter —> encoder —> rtppay —> udpsink

queue —> nvvideoconvert —> capsfilter —> fakesink
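To make the layout concrete, here is a rough Python/GStreamer sketch of the same pipeline. The resolutions, config-file names, tracker library path, RTSP URL, and UDP host/port are placeholders rather than my real settings:

```python
#!/usr/bin/env python3
# Rough sketch of the pipeline above, built with Gst.parse_launch.
# All paths, resolutions, and URLs below are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    # Main chain: mux -> PGIE -> SGIE -> tracker -> analytics -> OSD -> tee
    'nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 ! '
    'queue ! nvinfer config-file-path=pgie_config.txt ! '
    'queue ! nvinfer config-file-path=sgie_config.txt ! '
    'nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so ! '
    'queue ! nvdsanalytics config-file=analytics_config.txt ! '
    'queue ! nvvideoconvert ! queue ! nvdsosd ! nvvideoconvert ! tee name=t '
    # RTSP camera source feeding the muxer
    'rtspsrc location=rtsp://camera-ip/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 '
    # Branch 1: encode and stream out over RTP/UDP
    't. ! queue ! capsfilter caps="video/x-raw(memory:NVMM),format=I420" ! '
    'nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5400 sync=false '
    # Branch 2: convert and discard
    't. ! queue ! nvvideoconvert ! capsfilter caps="video/x-raw,format=RGBA" ! fakesink'
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```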

My environment runs Ubuntu 18.04 with the pre-installed versions of TensorRT, CUDA, and cuDNN. I recently switched from DeepStream 5.1 to 6.0.1 and have noticed that the RAM consumption of my program has increased from about 730 MB (DeepStream 5.1) to 1.4 GB (DeepStream 6.0.1). The only changes to my pipeline are the nvtracker and nvdsanalytics elements, which I added when moving to DeepStream 6.0.1.
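For reference, here is roughly how I track the application's resident memory over time (the process name below is just a placeholder for my actual binary):

```python
#!/usr/bin/env python3
# Periodically log the resident set size (VmRSS) of a running process.
# "deepstream-app" is a placeholder; substitute the actual process name.
import subprocess
import time

PROCESS_NAME = "deepstream-app"

def rss_mb(pid):
    """Read VmRSS in MB from /proc/<pid>/status."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # value is reported in kB
    return 0.0

pid = int(subprocess.check_output(["pidof", PROCESS_NAME]).decode().split()[0])
while True:
    print("%s RSS: %.1f MB" % (PROCESS_NAME, rss_mb(pid)))
    time.sleep(5)
```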

TensorRT Version : 8.2.1.9
CUDA Version: 10.2.300
CUDNN Version: 8.2.1.32
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
JetPack Version: 4.6.1 [L4T 32.7.3]
Architecture: aarch64
Model: NVIDIA Jetson Nano Developer Kit

I’m using the PGIE for number-plate detection and the SGIE for OCR, and the input comes from a camera’s RTSP stream. The models are ONNX files that are automatically converted to engine files when the program runs. Additionally, I’m using Redis to collect the SGIE (OCR) results and the nvdsanalytics output, roughly as in the sketch below.
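For context, the Redis hand-off looks roughly like this: a buffer probe on the SGIE src pad that reads the classifier (OCR) labels from the batch metadata and pushes them into a Redis list. The Redis host, the "plates" key, and the probe wiring are simplified placeholders, not my exact code:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds
import redis

# Placeholder Redis connection; host/port/key are not my real values.
r = redis.Redis(host="127.0.0.1", port=6379)

def glist_iter(glist, cast_fn):
    """Walk a pyds GList, casting each node; stops cleanly at the end."""
    while glist is not None:
        try:
            yield cast_fn(glist.data)
            glist = glist.next
        except StopIteration:
            break

def sgie_src_pad_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    for frame_meta in glist_iter(batch_meta.frame_meta_list, pyds.NvDsFrameMeta.cast):
        for obj_meta in glist_iter(frame_meta.obj_meta_list, pyds.NvDsObjectMeta.cast):
            for cls_meta in glist_iter(obj_meta.classifier_meta_list, pyds.NvDsClassifierMeta.cast):
                for label in glist_iter(cls_meta.label_info_list, pyds.NvDsLabelInfo.cast):
                    # Push the recognized plate text to a Redis list ("plates" is a placeholder key)
                    r.rpush("plates", label.result_label)
    return Gst.PadProbeReturn.OK

# Attached with:
#   sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_probe, None)
```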

The high RAM consumption is causing the device to run out of memory every few hours. Can you please suggest a solution for this issue?

In the above setup, the Jetson Nano was flashed using the NVIDIA SDK Manager.