Deepstream 5.1 Memory Leak in NvOSD_TextParams Objects

Please provide complete information as applicable to your setup.

• **Hardware Platform (Jetson / GPU)**: Jetson Xavier NX
• **DeepStream Version**: 5.1
• **JetPack Version (valid for Jetson only)**: 4.6
• **TensorRT Version**: 7
• **Issue Type**: Bug

I found a memory leak when running the following pipeline (DeepStream 5.1):

Notes:

  1. Inference information is saved in a global variable in inference_saver_probe
  2. Edge detection destroys batch_meta from the gst buffer
  3. Inference information is added (via draw_probe) to a new batch_meta created by streammux.
  4. draw_probe adds new obj_meta and display_meta objects like in deepstream_test_2.py and deepstream_ssd_parser.py:
obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
update obj_meta
pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)

display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
update display_meta
pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
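
For context, here is a minimal sketch of what a probe like draw_probe might look like, following the pattern above; the function body, field values, and labels are illustrative, not the actual production code:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def draw_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Re-attach a previously saved detection as object meta
        obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
        obj_meta.rect_params.left = 100        # illustrative values
        obj_meta.rect_params.top = 100
        obj_meta.rect_params.width = 200
        obj_meta.rect_params.height = 200
        obj_meta.text_params.display_text = "object"   # the field that appears to leak
        pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)

        # Attach overlay text via display meta
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        display_meta.text_params[0].display_text = "frame {}".format(frame_meta.frame_num)
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK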

Memory Leak:

After multiple tests I narrowed the source of the memory leak down to the text_params object. Specifically, setting the display_text field allocates memory that is never freed.
[Image: Screenshot from 2022-12-05 12-23-44]

  • When I ran the pipeline setting display_text, I got the following memory consumption:


Basically there’s no memory leak when I run the pipeline WITHOUT NvOSD_TextParams objects.

I have this code running on hundreds of devices, so I cannot just migrate them to DeepStream 6 (DS6). Also, I have read in multiple posts that this bug may also be present in DS6.

Please assist

typedef struct _NvOSD_TextParams {
char * display_text; /**< Holds the text to be overlayed. */
Thanks for sharing. The Python samples use the DeepStream C SDK through the Python bindings. As the code above shows, display_text is a pointer; users need to free it if they want to modify it, and it will be freed by the SDK when the meta is destroyed.
Could you provide a simplified DeepStream sample that reproduces this issue? Thanks.
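
For illustration, this is the free-before-overwrite pattern used in deepstream_ssd_parser.py (the label string below is just an example):

# obj_meta was acquired earlier via pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
txt_params = obj_meta.text_params
if txt_params.display_text:
    # display_text is a C string owned by the meta; free the old buffer
    # before assigning a new one, otherwise the old allocation is leaked.
    pyds.free_buffer(txt_params.display_text)
txt_params.display_text = "car 0.87"   # example label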

Hi

Please find attached sample code that reproduces the problem. I have removed several things from it to simplify it.

mem_leak_sample_code.py (5.4 KB)

The actual code includes more objects that add more text to the image, but you can see the memory leak in this simplified version too, as shown in the following image:

How do I run this code, and where should I place it? Also, can you simplify the pipeline? I get the error “no element edgedetect”.

Please find an even simpler version of the code attached.

mem_leak_sample2.zip (5.8 KB)

  • I replaced the rtspsrc with videotestsrc.
  • I removed the edgedetect element.
  • I ran the sample app for several hours to replicate the issue. See the following image for memory consumption:

How to run this code:

  1. Unzip mem_leak_sample2.zip.
  2. Update config_infer_primary_yolo.txt. Use any model you want; I cannot share my model/engine as it is part of my company's IP. Update the lines marked with < UPDATE >.
  3. Run the sample script:

python3 mem_leak_sample2.py

Where to place this code?
I don't understand the question.

  1. In the C code, I have confirmed that display_text will be freed after the meta is destroyed. Note that pyds.free_buffer(text_params.display_text) will not be called; you can add logs before it to verify.
  2. Since your pipeline is complex, can you use deepstream-test1.py to test this issue? It contains similar code (“display_text =”), sketched below.
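
For reference, the relevant fragment of osd_sink_pad_buffer_probe in deepstream_test_1.py looks roughly like this (abridged; variable names follow the sample):

display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
display_meta.num_labels = 1
py_nvosd_text_params = display_meta.text_params[0]
# Setting display_text allocates a C string buffer owned by the meta
py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={}".format(frame_number, num_rects)
py_nvosd_text_params.x_offset = 10
py_nvosd_text_params.y_offset = 12
py_nvosd_text_params.font_params.font_name = "Serif"
py_nvosd_text_params.font_params.font_size = 10
py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
py_nvosd_text_params.set_bg_clr = 1
py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)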

I ran the deepstream-test1.py sample script and I was able to reproduce the problem. I had to modify the script so the pipeline can run indefinitely. Here are the changes I made:

  1. Replaced filesrc with multifilesrc.
  2. I used a video that does not have a defined “Media length” to allow multifilesrc to loop the video forever. This is the video (1.2 MB) I used.
  3. Replaced nveglglessink with fakesink, since our devices do not have a display.
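
A sketch of those element changes (property values here are illustrative, not the exact ones I used):

# Hypothetical fragment: filesrc -> multifilesrc and nveglglessink -> fakesink
source = Gst.ElementFactory.make("multifilesrc", "file-source")
source.set_property("location", "looping_video.h264")   # a stream without a defined media length
source.set_property("loop", True)                        # re-read the file forever

sink = Gst.ElementFactory.make("fakesink", "fakesink")   # our devices have no display
sink.set_property("sync", False)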

Here’s the memory consumption with additional display_meta objects:

I also ran the code without creating display_meta objects. I removed the following line:

osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

Here’s the memory consumption without display_meta objects.

We can clearly see in the images that there’s a memory leak in the first approach and that there’s none in the second one.

I replaced nveglglessink with fakesink and used ffmpeg to concatenate 10 copies of sample_720p.h264 into a long video, sample_720p_x10.h264. After testing again I did not see an obvious memory leak. From the log, VmRSS stayed at 1018.4531 MiB for a long time; if there were a memory leak, this value would keep going up. Here are the code and the report.
deepstream_test_1.py (9.8 KB)
log.txt (28.5 KB)

Here is my memory leak test script: nvmemstat.py (DeepStream SDK FAQ - #14 by mchi). By the way, how did you monitor the memory leak?
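
For reference, VmRSS can be sampled periodically from /proc; the following is a generic illustration, not the nvmemstat.py script linked above:

import sys
import time

def read_vmrss_kib(pid):
    # VmRSS is reported in kB in /proc/<pid>/status
    with open("/proc/{}/status".format(pid)) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return None

if __name__ == "__main__":
    pid = int(sys.argv[1])
    while True:
        print("VmRSS: {:.1f} MiB".format(read_vmrss_kib(pid) / 1024.0))
        time.sleep(5)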

I looked at your logs and it looks like you only ran the code for a little over 2 minutes. In order to see the memory leak I had to run it for hours (see the pictures I have sent so far). Please run the code for at least 3 hours to see the memory trend. Also, switch to multifilesrc and use the video uploaded earlier to run the pipeline indefinitely.

I used the memory-profiler library to profile the memory consumption.

This is the command I used to run the code:
mprof run --include-children --multiprocess --output mem_leak_sample4.dat python3 deepstream_test_1.py h264.h264
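
The recorded .dat file can then be plotted with memory-profiler's companion command to produce memory-over-time graphs like the ones above (standard memory-profiler usage, not DeepStream-specific):

mprof plot mem_leak_sample4.dat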

Please share your modified deepstream-test1.py and configuration file. Thanks!

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.