Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Nano
• DeepStream Version: 5.0 GA
• JetPack Version: 4.4
• TensorRT Version: 7+
• Issue Type: bug
• How to reproduce the issue? (For bugs: which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
We noticed the deepstream-app was continually consuming memory (as seen using ‘top’), with RES resident/physical memory going up by about 30-40 MB per hour, sometimes even more. We tried to isolate the issue and believe that enabling the RTSP sink causes this: we disabled each plugin/feature one by one and ran the deepstream-app application each time. The leak was easily reproducible.
Run deepstream-app with the following options in the config file ds-config.txt (4.1 KB) :
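For reference, a hedged sketch of what the relevant sink section of such a config might look like (the actual ds-config.txt attachment is not reproduced here; `type=4` is deepstream-app's RTSP streaming sink, and the key names follow the standard deepstream-app config-file conventions, with port values chosen only for illustration):

```ini
[sink1]
enable=1
# type=4 = RTSPStreaming sink in deepstream-app
type=4
# codec=1 = H264
codec=1
bitrate=4000000
rtsp-port=8554
udp-port=5400
sync=0
```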
When the application was executed with no sinks (i.e., all sinks disabled), or with only one fakesink enabled, there was no memory leak.
When 3-4 RTSP sources and sinks are enabled, the deepstream-app crashes after 8-12 hours of operation (once memory is exhausted). This is why we started to suspect memory leaks and narrowed it down as explained earlier. I hope NVIDIA can replicate this easily and provide some inputs/a fix.
Has anyone else noticed memory leak with any kind of sinks?
One more observation: we also disabled ALL callbacks from the pipeline (when creating the pipeline), along with all the other major features/plugins listed in the earlier post. Memory was still being consumed at 40 MB+ per hour even with all callbacks disabled.
Hi,
Let's first get on the same page.
First case: one RTSP camera source, no sink.
No memory leak over one hour; after three minutes, memory stabilizes at one value. See log: firstcase.log (3.5 KB)
Second case: one RTSP camera source, RTSP sink; see the attached config ds-config.txt (4.1 KB).
There is a memory leak, but a small one: 588 KB in the first hour, 872 KB in the second, 356 KB in the third, not a huge leak like the 30-40 MB/h you posted. For details see the log 128_2020-10-10_12-31-45.log (9.2 KB), generated using a script that records the RSS memory used: dump_RSS_mem_runningtime_pid.sh, dump_RSS_mem_runningtime_pid.sh.log (365 Bytes)
We will continue to look into it and will update this thread with any findings.
Thanks for looking into this. Just to be sure: this was observed on two different Jetson Nano Developer Kits running the DS 5.0 GA release (not on any desktop system). We assume the same configuration was used on your side to reproduce the issue?
The only difference I see in the config file is that the source types are different. Please try with a file as the source.
The data that we captured is below. We used ‘top’ to monitor the memory; I hope that is sufficient to provide a high-level summary of memory consumption over the period.
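One way such data could have been captured with top (a hedged sketch, assuming the process is named deepstream-app; batch mode `-b` makes top's output scriptable, and `-p` restricts it to one PID):

```shell
#!/bin/sh
# Hypothetical: append deepstream-app's line from top (batch mode) to a log
# once per minute, so RES can be tracked over a long run.
PID=$(pgrep -f deepstream-app | head -n 1)

while kill -0 "$PID" 2>/dev/null; do
    # -b: batch mode (non-interactive), -n 1: one iteration, -p: this PID only;
    # the last line of the output is the per-process row containing RES.
    top -b -n 1 -p "$PID" | tail -n 1 >> deepstream_mem.log
    sleep 60
done
```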
At the start of the application: