Please provide complete information as applicable to your setup.
• **Hardware Platform (Jetson / GPU)**: Jetson Xavier NX
• **DeepStream Version**: 5.1
• **JetPack Version (valid for Jetson only)**: 4.6
• **TensorRT Version**: 7
• **Issue Type**: Bug
• **Requirement details** (for new requirements: the module name, i.e. which plugin or sample application, and the function description): N/A
I found a memory leak when running the following pipeline (deepstream 5.1):
After multiple tests I narrowed the source of the memory leak down to the text_params object. Specifically, setting the display_text field allocates memory that is never freed.
When I ran the pipeline with display_text set, I got the following memory consumption:
Basically there’s no memory leak when I run the pipeline WITHOUT NvOSD_TextParams objects.
I have this code running on hundreds of devices. I cannot just migrate them to Deepstream 6 (DS6). Also, I read in multiple posts that this bug may also be in DS6.
```c
typedef struct _NvOSD_TextParams {
    char *display_text; /**< Holds the text to be overlayed. */
    ...
```
Thanks for sharing. The Python samples use the DeepStream C SDK through the Python bindings. As the code shows, display_text is a pointer; users need to free it if they want to modify it, and it will be freed by the SDK when the meta is destroyed.
Could you create a simplified DeepStream sample that reproduces this issue? Thanks.
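The ownership rule described above (a string assigned to display_text becomes a heap-allocated C string that the SDK later frees) can be illustrated with a self-contained ctypes sketch. The `TextParams` struct below is a hypothetical stand-in for NvOSD_TextParams, not the real binding:

```python
import ctypes

# Load the C runtime (Linux/macOS); strdup/free model the SDK's allocation behavior.
libc = ctypes.CDLL(None)
libc.strdup.restype = ctypes.c_void_p
libc.strdup.argtypes = [ctypes.c_char_p]
libc.free.argtypes = [ctypes.c_void_p]

class TextParams(ctypes.Structure):
    """Hypothetical stand-in mirroring only the display_text field."""
    _fields_ = [("display_text", ctypes.c_void_p)]

p = TextParams()
# Assigning text allocates a new C string; the struct now owns that memory.
p.display_text = libc.strdup(b"Frame 42")
text = ctypes.cast(p.display_text, ctypes.c_char_p).value

# What the SDK does when the meta is destroyed: free the string exactly once.
# If this free never happens, each assignment leaks one allocation.
libc.free(p.display_text)
p.display_text = None
```

The sketch makes the failure mode concrete: every `strdup`-style allocation on assignment must be matched by exactly one `free` at meta destruction, or memory grows with every frame.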
The actual code includes more objects that add more text to the image, but you can see the memory leak in this simplified version too, as shown in the following image:
Update config_infer_primary_yolo.txt. Use any model you want; I cannot share my model/engine as it is part of my company's IP. Update the lines marked with < UPDATE >.
Run the sample script:
```
python3 mem_leak_sample2.py
```
what to place this code?
I don’t understand the question
In the C code, I have confirmed that display_text is freed after the meta is destroyed. Note that pyds.free_buffer(text_params.display_text) will not be called; you can add logs before it to verify.
Noticing that your pipeline is complex, can you use deepstream-test1.py to test this issue? It has similar code ("display_text =").
I ran the deepstream-test1.py sample script and I was able to reproduce the problem. I had to modify the script so the pipeline can run indefinitely. Here are the changes I made:
- Replaced filesrc with multifilesrc.
- Used a video that does not have a defined "Media length", to allow multifilesrc to loop the video forever. This is the video (1.2 MB) I used.
- Replaced nveglglessink with fakesink, since our devices do not have a display.
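The two swaps above can be summarized as a gst-launch-style pipeline description. This is a sketch only: the muxer and inference properties, file names, and config path are assumptions based on the stock deepstream-test1.py, not verified values:

```python
# Decode branch: multifilesrc (looping) replaces filesrc, then feeds the
# stream muxer's request pad (mux.sink_0 in gst-launch syntax).
decode_branch = " ! ".join([
    "multifilesrc location=sample.h264 loop=true",  # loops forever, unlike filesrc
    "h264parse",
    "nvv4l2decoder",
    "mux.sink_0",
])

# Inference branch: same as deepstream-test1, but fakesink replaces
# nveglglessink so the pipeline runs on headless devices.
infer_branch = " ! ".join([
    "nvstreammux name=mux batch-size=1 width=1280 height=720",
    "nvinfer config-file-path=dstest1_pgie_config.txt",
    "nvvideoconvert",
    "nvdsosd",
    "fakesink",
])

pipeline_desc = decode_branch + " " + infer_branch
```

Building the description as a string makes the two replacements easy to diff against the original sample before wiring the same elements up with `Gst.ElementFactory.make`.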
Here’s the memory consumption with additional display_meta objects :
I also ran the code without creating display_meta objects. I removed the following line:
```python
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```
Here’s the memory consumption without display_meta objects.
I replaced nveglglessink with fakesink and used ffmpeg to combine 10 copies of sample_720p.h264 into a long video, sample_720p_x10.h264. After testing again I did not see an obvious memory leak. From the log, VmRSS stayed at 1018.4531 MiB for a long time; if there were a memory leak, this value would continue to go up. Here are the code and report: deepstream_test_1.py (9.8 KB) log.txt (28.5 KB)
I looked at your logs, and it appears you only ran the code for a little over 2 minutes. To see the memory leak I had to run it for hours (see the pictures I have sent so far). Please run the code for at least 3 hours to observe the memory trend. Also, switch to multifilesrc and use the video uploaded before, so the pipeline runs indefinitely.
I used the memory-profiler library to profile the memory consumption. This is the command I used to run the code:
```
mprof run --include-children --multiprocess --output mem_leak_sample4.dat python3 deepstream_test_1.py h264.h264
```
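As a stdlib-only cross-check (a sketch, independent of the memory-profiler library), the same resident-set trend can be sampled from inside the process itself, for example once per minute from a side thread or probe callback:

```python
import resource
import sys

def peak_rss_mib():
    """Peak resident set size of this process in MiB (Linux/macOS only)."""
    ru = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux and in bytes on macOS.
    if sys.platform.startswith("linux"):
        return ru / 1024.0
    return ru / (1024.0 * 1024.0)

# Log this periodically while the pipeline runs; a value that climbs
# steadily over hours (rather than plateauing) indicates a leak.
sample = peak_rss_mib()
```

This complements mprof: mprof samples from outside the process, while in-process logging ties each sample to a known point in the pipeline's lifecycle.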
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.