Memory leak when testing DeepStream sample app

Can you reproduce the memory leak issue with the original code (fakesink) and video7-4k-hevc.mp4 in your environment?

This is the result reported by nvmemstat:

PID: 1645075   16:20:14	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23601.2070 MiB	VmRSS: 3164.7812 MiB	RssFile: 687.2383 MiB	RssAnon: 2467.5430 MiB	lsof: 5
PID: 1645075   16:20:15	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23601.2070 MiB	VmRSS: 3166.1562 MiB	RssFile: 688.6133 MiB	RssAnon: 2467.5430 MiB	lsof: 5
PID: 1645075   16:20:16	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23601.2070 MiB	VmRSS: 3166.2969 MiB	RssFile: 688.7344 MiB	RssAnon: 2467.5625 MiB	lsof: 5
PID: 1645075   16:20:17	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23601.6211 MiB	VmRSS: 3167.1875 MiB	RssFile: 689.5352 MiB	RssAnon: 2467.6523 MiB	lsof: 5
PID: 1645075   16:20:18	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23602.5938 MiB	VmRSS: 3168.1328 MiB	RssFile: 689.7148 MiB	RssAnon: 2468.4180 MiB	lsof: 5
PID: 1645075   16:20:20	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23603.4531 MiB	VmRSS: 3168.7695 MiB	RssFile: 689.7773 MiB	RssAnon: 2468.9922 MiB	lsof: 5
PID: 1645075   16:20:21	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23603.4531 MiB	VmRSS: 3168.7773 MiB	RssFile: 689.7773 MiB	RssAnon: 2469.0000 MiB	lsof: 5
PID: 1645075   16:20:22	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23603.4531 MiB	VmRSS: 3168.8945 MiB	RssFile: 689.8945 MiB	RssAnon: 2469.0000 MiB	lsof: 5
PID: 1645075   16:20:23	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23603.4531 MiB	VmRSS: 3170.6211 MiB	RssFile: 691.6016 MiB	RssAnon: 2469.0195 MiB	lsof: 5
PID: 1645075   16:20:25	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23603.4531 MiB	VmRSS: 3171.1016 MiB	RssFile: 692.0781 MiB	RssAnon: 2469.0234 MiB	lsof: 5
PID: 1645075   16:20:26	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23620.1758 MiB	VmRSS: 3220.6719 MiB	RssFile: 723.9141 MiB	RssAnon: 2486.7578 MiB	lsof: 5
PID: 1645075   16:20:27	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23621.6758 MiB	VmRSS: 3222.4336 MiB	RssFile: 723.9766 MiB	RssAnon: 2488.4570 MiB	lsof: 5
PID: 1645075   16:20:28	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23622.3672 MiB	VmRSS: 3223.3203 MiB	RssFile: 723.9766 MiB	RssAnon: 2489.3438 MiB	lsof: 5
PID: 1645075   16:20:29	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23623.3047 MiB	VmRSS: 3224.0742 MiB	RssFile: 723.9766 MiB	RssAnon: 2490.0977 MiB	lsof: 5
PID: 1645075   16:20:31	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23624.0547 MiB	VmRSS: 3224.9219 MiB	RssFile: 723.9766 MiB	RssAnon: 2490.9453 MiB	lsof: 5
PID: 1645075   16:20:32	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23625.9297 MiB	VmRSS: 3226.7383 MiB	RssFile: 723.9766 MiB	RssAnon: 2492.7617 MiB	lsof: 5
PID: 1645075   16:20:33	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23626.8672 MiB	VmRSS: 3227.6758 MiB	RssFile: 723.9766 MiB	RssAnon: 2493.6992 MiB	lsof: 5
PID: 1645075   16:20:34	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23627.4297 MiB	VmRSS: 3228.2500 MiB	RssFile: 723.9766 MiB	RssAnon: 2494.2734 MiB	lsof: 5
PID: 1645075   16:20:35	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 23628.9297 MiB	VmRSS: 3229.7109 MiB	RssFile: 723.9766 MiB	RssAnon: 2495.7344 MiB	lsof: 5
PID: 1645075   16:20:37	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24895.8789 MiB	VmRSS: 3721.3398 MiB	RssFile: 803.2852 MiB	RssAnon: 2503.8945 MiB	lsof: 5
PID: 1645075   16:20:38	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24896.6484 MiB	VmRSS: 3725.4805 MiB	RssFile: 806.3438 MiB	RssAnon: 2504.9766 MiB	lsof: 5
PID: 1645075   16:20:39	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24896.6484 MiB	VmRSS: 3726.4648 MiB	RssFile: 806.3438 MiB	RssAnon: 2505.9609 MiB	lsof: 5
PID: 1645075   16:20:40	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24896.6484 MiB	VmRSS: 3726.9219 MiB	RssFile: 806.4648 MiB	RssAnon: 2506.2969 MiB	lsof: 5
PID: 1645075   16:20:41	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24896.6484 MiB	VmRSS: 3727.2188 MiB	RssFile: 806.4648 MiB	RssAnon: 2506.5938 MiB	lsof: 5
PID: 1645075   16:20:43	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24896.6484 MiB	VmRSS: 3727.2383 MiB	RssFile: 806.4648 MiB	RssAnon: 2506.6133 MiB	lsof: 5
PID: 1645075   16:20:44	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 24896.6484 MiB	VmRSS: 3727.2930 MiB	RssFile: 806.4648 MiB	RssAnon: 2506.6680 MiB	lsof: 5

As you can see, even over a short period there is still a leak of roughly 1 MB/s.

  1. As you know, the application uses a lot of memory while loading the model engine. Did you start the application and nvmemstat at the same time? Please start nvmemstat only after the application prints “Running…”.
  2. Are you testing in the DS 6.2 Docker container now? If you can still reproduce the issue, please capture a valgrind log; you might use this command:
    valgrind --leak-check=full --log-file=leak.log ./deepstream-image-meta-test 0 file:///home/video7-4k-hevc.mp4

Here is the result after “Running…” is printed:

PID: 1697856   16:51:00	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34688.0234 MiB	VmRSS: 3721.3867 MiB	RssFile: 803.2852 MiB	RssAnon: 2503.9414 MiB	lsof: 5
PID: 1697856   16:51:01	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34688.0234 MiB	VmRSS: 3722.1758 MiB	RssFile: 803.6602 MiB	RssAnon: 2504.3555 MiB	lsof: 5
PID: 1697856   16:51:03	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3726.0664 MiB	RssFile: 806.5234 MiB	RssAnon: 2505.3828 MiB	lsof: 5
PID: 1697856   16:51:04	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3726.2109 MiB	RssFile: 806.5234 MiB	RssAnon: 2505.5273 MiB	lsof: 5
PID: 1697856   16:51:05	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3726.2109 MiB	RssFile: 806.5234 MiB	RssAnon: 2505.5273 MiB	lsof: 5
PID: 1697856   16:51:06	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3726.8008 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.1172 MiB	lsof: 5
PID: 1697856   16:51:08	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3726.9844 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.3008 MiB	lsof: 5
PID: 1697856   16:51:09	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.0547 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.3711 MiB	lsof: 5
PID: 1697856   16:51:10	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.1328 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.4492 MiB	lsof: 5
PID: 1697856   16:51:11	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.1367 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.4531 MiB	lsof: 5
PID: 1697856   16:51:12	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.2031 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.5195 MiB	lsof: 5
PID: 1697856   16:51:14	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.2031 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.5195 MiB	lsof: 5
PID: 1697856   16:51:15	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.2031 MiB	RssFile: 806.5234 MiB	RssAnon: 2506.5195 MiB	lsof: 5
PID: 1697856   16:51:16	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.9805 MiB	RssFile: 807.0664 MiB	RssAnon: 2506.7539 MiB	lsof: 5
PID: 1697856   16:51:17	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.9805 MiB	RssFile: 807.0664 MiB	RssAnon: 2506.7539 MiB	lsof: 5
PID: 1697856   16:51:19	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3727.9805 MiB	RssFile: 807.0664 MiB	RssAnon: 2506.7539 MiB	lsof: 5
PID: 1697856   16:51:20	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34752.7930 MiB	VmRSS: 3728.0469 MiB	RssFile: 807.0664 MiB	RssAnon: 2506.8203 MiB	lsof: 5
PID: 1697856   16:51:21	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 34177.0820 MiB	VmRSS: 3204.6758 MiB	RssFile: 732.5938 MiB	RssAnon: 2464.0820 MiB	lsof: 5
PID: 1697856   16:51:22	Total used hardware memory: 0.0000 KiB	hardware memory: 0.0000 KiB		VmSize: 0.0000 KiB	VmRSS: 0.0000 KiB	RssFile: 0.0000 KiB	RssAnon: 0.0000 KiB	lsof: 0

I am having trouble installing valgrind inside the Docker container. It may take a while.

The output of valgrind is in the following file:
leak.log (382.4 KB)

Here is the valgrind leak summary from my project, which leaks about 1 MB/s:

==465696== LEAK SUMMARY:
==465696==    definitely lost: 36,061 bytes in 47 blocks
==465696==    indirectly lost: 21,290 bytes in 29 blocks
==465696==      possibly lost: 355,480 bytes in 2,723 blocks
==465696==    still reachable: 1,466,009,062 bytes in 1,509,753 blocks
==465696==                       of which reachable via heuristic:
==465696==                         stdstring          : 10,912,005 bytes in 183,437 blocks
==465696==                         length64           : 2,456 bytes in 53 blocks
==465696==                         newarray           : 1,904 bytes in 39 blocks
==465696==         suppressed: 0 bytes in 0 blocks
==465696== Reachable blocks (those to which a pointer was found) are not shown.
==465696== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==465696== 
==465696== For lists of detected and suppressed errors, rerun with: -s
==465696== ERROR SUMMARY: 934 errors from 925 contexts (suppressed: 4 from 4)

“Definitely lost”, “indirectly lost”, and “possibly lost” are relatively small, but “still reachable” is massive.

  1. I tested two pipelines: pipeline1 has the memory leak issue, pipeline2 does not (log: test-h264.txt (18.1 KB)).
    pipeline1: gst-launch-1.0 filesrc location=/home/10720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fakesink
    pipeline2: gst-launch-1.0 filesrc location=/home/10720p.h264 ! h264parse ! nvv4l2decoder ! fakesink
    The only difference is qtdemux, so testing the MP4 file reproduces the memory leak.

  2. I replaced create_source_bin with “filesrc + h265parse + nvv4l2decoder” in deepstream-image-meta-test
    deepstream_image_meta_test.c (23.5 KB)
    and concatenated 10 copies of video7-4k-hevc.mp4 into one HEVC elementary stream, then tested again. The application processed 17999 frames with almost no memory leak; here are the application and memory-usage logs:
    app-log.txt (1.5 MB)
    app-memory.txt (40.8 KB)
    So nvds_obj_enc_process should be OK. A rough sketch of that replacement source chain follows this list.
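
A minimal C sketch of what such a replacement source chain might look like (element names, variable names, and the file path are only illustrative assumptions; the actual modification is in the attached deepstream_image_meta_test.c):

    #include <gst/gst.h>

    /* Sketch only: build "filesrc ! h265parse ! nvv4l2decoder" in place of
     * create_source_bin(). The decoder's src pad is then linked to the
     * requested sink_0 pad of nvstreammux, as in the original sample. */
    static GstElement *
    build_file_source_chain (GstElement *pipeline, const gchar *location)
    {
      GstElement *src     = gst_element_factory_make ("filesrc", "file-source");
      GstElement *parser  = gst_element_factory_make ("h265parse", "h265-parser");
      GstElement *decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");

      if (!src || !parser || !decoder) {
        g_printerr ("Failed to create filesrc/h265parse/nvv4l2decoder\n");
        return NULL;
      }

      /* Point filesrc at the concatenated HEVC elementary stream. */
      g_object_set (G_OBJECT (src), "location", location, NULL);

      gst_bin_add_many (GST_BIN (pipeline), src, parser, decoder, NULL);
      if (!gst_element_link_many (src, parser, decoder, NULL)) {
        g_printerr ("Failed to link filesrc -> h265parse -> nvv4l2decoder\n");
        return NULL;
      }
      return decoder;
    }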

I have also come to the conclusion that the leak comes from other GStreamer elements, probably from rtspsrc. I am trying to use uridecodebin instead of rtspsrc in my pipeline. I tried a simple pipeline like uridecodebin uri=<src> ! nvstreammux ! fakesink and got the following error:

Error while setting IOCTL
Invalid control
S_EXT_CTRLS for CUDA_GPU_ID failed
.[16:27:00.168] [warning] Warning: No decoder available for type 'video/x-h265, stream-format=(string)byte-stream, alignment=(string)au, width=(int)3840, height=(int)2160, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, profile=(string)main, tier=(string)main, level=(string)5.1'.: gsturidecodebin.c(920): unknown_type_cb (): /GstPipeline:face-app/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin

Could you give me a hint about where it could be wrong? Note that I can successfully run deepstream-image-meta-test with the same pipeline (after modifying the code to fit that pipeline) in the same Docker environment.
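
For context, uridecodebin exposes its src pad dynamically, so it has to be linked from a pad-added callback to a requested sink_0 pad of nvstreammux. A minimal sketch of that wiring, with assumed element and variable names (create_source_bin() in the samples does the same thing and is the authoritative reference):

    #include <gst/gst.h>

    /* Sketch only: link uridecodebin's dynamically created pad to nvstreammux.
     * Connect with:
     *   g_signal_connect (uri_decode_bin, "pad-added",
     *                     G_CALLBACK (on_pad_added), streammux);
     */
    static void
    on_pad_added (GstElement *decodebin, GstPad *src_pad, gpointer user_data)
    {
      GstElement *streammux = GST_ELEMENT (user_data);
      GstCaps *caps = gst_pad_get_current_caps (src_pad);
      if (!caps)
        return;

      /* Only link decoded raw video to the muxer. */
      if (g_str_has_prefix (gst_structure_get_name (gst_caps_get_structure (caps, 0)),
              "video/x-raw")) {
        GstPad *sink_pad = gst_element_get_request_pad (streammux, "sink_0");
        if (gst_pad_link (src_pad, sink_pad) != GST_PAD_LINK_OK)
          g_printerr ("Failed to link decoder output to nvstreammux\n");
        gst_object_unref (sink_pad);
      }
      gst_caps_unref (caps);
    }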

By the way, is there any way to reverse-engineer the pipeline that uridecodebin builds under the hood?

Do you want to check the contents of uridecodebin? uridecodebin is open-source GStreamer code; you might dump the media pipeline to check. Here is the method: DeepStream SDK FAQ - #10 by mchi
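
For completeness, the standard GStreamer mechanism behind such a dump is the dot-file graph; a minimal sketch, assuming the top-level pipeline variable is called pipeline:

    #include <gst/gst.h>

    /* Sketch only: dump a .dot graph of the whole pipeline. The environment
     * variable GST_DEBUG_DUMP_DOT_DIR must point at a writable directory
     * (e.g. export GST_DEBUG_DUMP_DOT_DIR=/tmp) before the process starts.
     * Children of bins such as uridecodebin only appear once they have been
     * created, so call this after the pipeline has reached PLAYING. */
    static void
    dump_pipeline_graph (GstElement *pipeline, const gchar *name)
    {
      GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline), GST_DEBUG_GRAPH_SHOW_ALL, name);
      /* Convert the generated file with graphviz:
       *   dot -Tpng /tmp/<name>.dot -o pipeline.png */
    }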

I mean: which equivalent pipeline can be used instead of uridecodebin, like you did in your example with filesrc + h265parse + nvv4l2decoder? The method you suggested only renders uridecodebin as a single block without any detail of what it contains. I want to know what pipeline can be used instead of uridecodebin for an RTSP source. I have tried two pipelines:

1. rtspsrc ! rtph265depay ! h265parse ! nvv4l2decoder ! nvstreammux ! fakesink
2. uridecodebin ! nvstreammux ! fakesink

The first pipeline has the leak issue whereas the second does not. That's why I want to know what uridecodebin is doing under the hood.

By the way, I believe the memory leak issue in qtdemux was resolved 3 years ago in this merge, so I suggest upgrading the GStreamer version in the Docker environment.

Thanks for sharing. Is this still a DeepStream issue that needs support? Thanks.

Now I believe that nvds_obj_enc_process does not cause the memory leak. However, I still have heavy leaks in my project from unknown sources. I have to investigate more, and I will update here if there is anything new or open a new topic. Thank you so much for your support recently.

By the way, in DeepStream 6.2 I encounter a segmentation fault (core dump) when using nvds_add_user_meta_to_obj. The same source code works fine in DeepStream 6.1. None of the sample apps in DeepStream 6.2 make use of nvds_add_user_meta_to_obj. Could you check that out for me, please?

Could you modify the simplest test1 to reproduce this issue? You might refer to attach_metadata_segmentation in the DeepStream SDK; it calls nvds_add_user_meta_to_obj.
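
For reference, a minimal sketch of the usual nvds_add_user_meta_to_obj pattern (the payload struct, the meta-type descriptor string, and the function names below are only illustrative assumptions; attach_metadata_segmentation in the SDK is the authoritative example):

    #include <string.h>
    #include <glib.h>
    #include "nvdsmeta.h"

    /* Hypothetical per-object payload used only for this sketch. */
    typedef struct { gfloat score; } MyObjInfo;

    static gpointer
    my_copy_func (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      MyObjInfo *dst = g_malloc (sizeof (MyObjInfo));
      memcpy (dst, user_meta->user_meta_data, sizeof (MyObjInfo));
      return dst;
    }

    static void
    my_release_func (gpointer data, gpointer user_data)
    {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
      g_free (user_meta->user_meta_data);
      user_meta->user_meta_data = NULL;
    }

    /* Attach a MyObjInfo to one object. The NvDsUserMeta must be acquired from
     * the pool of the same NvDsBatchMeta that owns obj_meta. */
    static void
    attach_my_obj_info (NvDsBatchMeta *batch_meta, NvDsObjectMeta *obj_meta, gfloat score)
    {
      NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      MyObjInfo *info = g_malloc0 (sizeof (MyObjInfo));
      info->score = score;

      user_meta->user_meta_data = info;
      /* "MY.APP.OBJ_INFO" is a made-up descriptor string for this sketch. */
      user_meta->base_meta.meta_type =
          nvds_get_user_meta_type ((gchar *) "MY.APP.OBJ_INFO");
      user_meta->base_meta.copy_func = my_copy_func;
      user_meta->base_meta.release_func = my_release_func;

      nvds_add_user_meta_to_obj (obj_meta, user_meta);
    }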

I cannot reproduce the problem with deepstream-test1. Maybe something on my side needs to change to adapt to DeepStream 6.2.

Sorry for the late reply. Is this still a DeepStream issue that needs support? Thanks.

It is no longer a DeepStream issue to support. I will update if there is something new.
