Default DeepStream app causing memory leaks!

Please provide complete information as applicable to your setup.

  • Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 2060 (GPU)
  • TensorRT Version: 7.0
  • NVIDIA GPU Driver Version (valid for GPU only): 450.102
  • Issue Type (questions, new requirements, bugs): questions
  • GCC: 7.5
  • Python: 3.7
  • cuDNN: 7.6.5
  • CUDA: 10.2

I tried running Valgrind to detect memory leaks in my application, but I get leak reports even with the default deepstream-app! This is the command I used:
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --track-origins=yes --log-file=valgrind-out.log deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

LEAK SUMMARY:
==4980== definitely lost: 24,507 bytes in 54 blocks
==4980== indirectly lost: 181,887 bytes in 198 blocks
==4980== possibly lost: 99,902 bytes in 782 blocks
==4980== still reachable: 418,700,107 bytes in 338,344 blocks
==4980== of which reachable via heuristic:
==4980== stdstring : 111,779 bytes in 2,242 blocks
==4980== length64 : 1,984 bytes in 34 blocks
==4980== newarray : 4,208 bytes in 28 blocks
==4980== suppressed: 0 bytes in 0 blocks
==4980==
==4980== For lists of detected and suppressed errors, rerun with: -s
==4980== ERROR SUMMARY: 305 errors from 246 contexts (suppressed: 0 from 0)

Please respond.

Valgrind does not produce useful results with GLib-based applications out of the box. Try this:
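For reference, a common GLib-friendly invocation looks like the sketch below (this is my guess at the kind of setup meant above, not the responder's exact command). `G_SLICE=always-malloc` and `G_DEBUG=gc-friendly` are standard GLib debugging variables: the first makes GLib bypass its slice allocator so Valgrind sees every allocation individually, the second zero-fills freed slices so fewer stale pointers show up as "possibly lost".

```shell
#!/bin/sh
# Force GLib to use plain malloc so Valgrind can track each allocation,
# and zero-fill freed slices to reduce false positives.
export G_SLICE=always-malloc
export G_DEBUG=gc-friendly

# Dry run: print the full invocation (drop the leading 'echo' to run it).
echo valgrind --tool=memcheck --leak-check=full --show-leak-kinds=definite \
  --num-callers=30 --log-file=valgrind-out.log \
  deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
```

GLib also ships a Valgrind suppression file (`glib.supp` in the GLib source tree) that can be passed via `--suppressions=`; its installed location varies by distribution.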

But I have experienced a memory spike on the Jetson Nano after the default deepstream-app run finished. Initially the RAM consumption was ~450MB, but after the default deepstream-app run finished it settled at ~950MB. That's why I tried Valgrind to check for memory leaks.
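The before/after RAM figures above can be sampled like this (a minimal sketch: it reads system-wide usage from /proc/meminfo, which is how `free` computes it; the deepstream-app line is shown commented and should be uncommented on the target board):

```shell
#!/bin/sh
# Print system memory in use (MB), derived from /proc/meminfo
# as MemTotal - MemAvailable, converted from kB to MB.
used_mb() {
    awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {print int((t-a)/1024)}' /proc/meminfo
}

echo "before: $(used_mb) MB"
# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
echo "after:  $(used_mb) MB"
```

Comparing the two numbers across a run shows whether memory is released back to the system after the app exits.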