Memory Leak in Video Recording

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.0

We are trying to capture an event that occurs in the live stream as a 5-second video, using a dynamic pipeline with nvv4l2h264enc, nvvideoconvert and a filesink.

  • In the main pipeline we add a tee, to which a recording branch is dynamically attached when we call the start_recording function.

  • In the start_recording function we add and link the recording-branch elements (queue -> nvvideoconvert -> capsfilter -> nvv4l2h264enc -> h264parse -> qtmux -> filesink), then call g_timeout_add_seconds to set a 5-second timer that stops the recording via timeout_cb_0 (the timer callback).

  • In the timer callback we add an IDLE probe on the tee src pad and unlink the recording-branch elements in unlink_cb (the idle probe callback).

  • In the idle probe callback we send an EOS event, then remove the elements from the pipeline, set them to NULL state and unref them (a minimal sketch of this sequence follows below).
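
For reference, here is a minimal sketch of the timer and idle-probe callbacks described above, written with the GStreamer Python bindings. The user_data keys (pipeline, tee, tee_src_pad, branch_elements) are placeholders for whatever start_recording() actually created, and a robust version would wait until the EOS has reached the filesink before moving the branch to NULL, so that qtmux can finalize the file.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def timeout_cb_0(user_data):
    # Registered from start_recording() via GLib.timeout_add_seconds(5, timeout_cb_0, user_data).
    # Schedule the unlink on an IDLE probe so the branch is detached only
    # while the tee src pad is not pushing data.
    user_data["tee_src_pad"].add_probe(Gst.PadProbeType.IDLE, unlink_cb, user_data)
    return False  # one-shot timer

def unlink_cb(pad, info, user_data):
    pipeline = user_data["pipeline"]
    tee = user_data["tee"]
    branch = user_data["branch_elements"]  # [queue, nvvideoconvert, ..., filesink]

    # Detach the recording branch from the tee and release the request pad.
    queue_sink = branch[0].get_static_pad("sink")
    pad.unlink(queue_sink)
    tee.release_request_pad(pad)

    # Push EOS into the branch so qtmux can write its headers. This sketch
    # tears the branch down immediately, mirroring the steps described above;
    # in practice you would wait for the EOS to reach the filesink first.
    queue_sink.send_event(Gst.Event.new_eos())
    for element in branch:
        pipeline.remove(element)
        element.set_state(Gst.State.NULL)  # no explicit unref is needed in Python

    return Gst.PadProbeReturn.REMOVE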

Unfortunately, we noticed that memory usage increases after each video is saved. Do you have any advice on how to avoid this memory leak?

What kind of memory leak is it? Can you confirm where the leak comes from? Is it related to any DeepStream plugin? Have you tried deepstream-app with smart recording? Does deepstream-app show any memory leak?

It is hard to track down a memory leak by eye.
Can you try valgrind (https://valgrind.org/)? We have found it to be a good tool for identifying where a leak originates.

The following is a sample command line; replace <your-application> with the command you normally use to launch your app:
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --track-origins=yes --log-file=valgrind-out_2.log <your-application> [args]

That code looks familiar. ;-)

When I was using DS 4.0.2 with the same logic, I also had memory leaks. I found a number of ways to work around it, in the following order:

  1. I created a Python wrapper that uses the subprocess module to start my DeepStream app and also monitor system memory. When available memory dropped below 10%, I restarted the DeepStream app. Restarts are fairly quick once the nvinfer engine files have already been generated (a minimal sketch of such a wrapper appears after this list).

  2. Avoid the source of the leaks by re-encoding to H.264/H.265 full time and placing your tee after the encoder, so that only the file muxer and filesink are handled dynamically. Alternatively, put your recording tee before you decode the source; that way you don't need to re-encode at all and therefore avoid those leaks.

  3. Switch to smart record, so you can remove all the dynamic pipeline manipulation and simplify your code.
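
Since no code is attached for option 1, here is a minimal sketch of such a watchdog in Python, assuming a Linux host. The launch command, the 10% threshold and the 30-second poll interval are placeholders; it reads MemAvailable from /proc/meminfo rather than relying on any particular monitoring library.

import subprocess
import time

APP_CMD = ["./deepstream-app", "-c", "config.txt"]  # placeholder launch command
THRESHOLD = 0.10                                    # restart below 10% available memory

def available_fraction():
    # Parse MemAvailable / MemTotal (both in kB) from /proc/meminfo.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info["MemAvailable"] / info["MemTotal"]

def main():
    proc = subprocess.Popen(APP_CMD)
    try:
        while True:
            time.sleep(30)
            if proc.poll() is not None or available_fraction() < THRESHOLD:
                # App exited or memory is low: restart it. Restarts are quick
                # once the nvinfer engine files have already been generated.
                proc.terminate()
                proc.wait(timeout=30)
                proc = subprocess.Popen(APP_CMD)
    finally:
        proc.terminate()

if __name__ == "__main__":
    main()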

Please find the output log file attached.

Thank you for sharing the solutions.

  • We need to record more than one video at the same time, which is not supported by Smart Record.
  • Regarding the second solution, we tried the following pipeline:
    nvstreammux → nvinfer → nvtracker → nvvideoconvert → capsfilter(video/x-raw, format=RGBA) → dsexample → nvvideoconvert → capsfilter(video/x-raw(memory:NVMM), format=(string)I420) → nvv4l2h264enc → h264parse → tee → queue → fakesink
    but we get a Segmentation fault (core dumped).

I’ve tried the following pipeline with dGPU and a Jetson NX board, from the /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app folder, and there is no error.

gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so enable-batch-process=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! dsexample ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h264enc ! h264parse ! tee ! queue ! fakesink

@shahad Why do you think recording more than one source is not possible?

This is the output when launching the pipeline.

I want to record overlapped videos from the same source.

So there is no segmentation fault with the pipeline.

  1. Recording a period of video only requires changing the state of the encoder->muxer->filesink branch and sending an EOS event. You don't need to add/remove pads and elements during playback.
  2. It is possible to record overlapping videos from the same source. For example, by using a tee to link to two encoder->muxer->filesink bins, you can control the state and events for each bin separately. When the start/stop recording intervals overlap, the recorded videos overlap. This is also how DeepStream smart recording works (see the sketch after this list).
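
To make point 2 concrete, here is a minimal Python sketch, assuming the recording branches are created and linked to the tee before the pipeline starts. The RecordBranch class and its element list are illustrative, not a DeepStream API; error handling and re-using a branch for a second file after EOS are left out.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

class RecordBranch:
    def __init__(self, pipeline, tee, filename):
        self.recording = False
        factories = ["queue", "nvvideoconvert", "capsfilter",
                     "nvv4l2h264enc", "h264parse", "qtmux", "filesink"]
        self.elements = [Gst.ElementFactory.make(f, None) for f in factories]
        self.elements[2].set_property(
            "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
        self.elements[-1].set_property("location", filename)
        self.elements[-1].set_property("async", False)  # don't hold up preroll while idle
        for e in self.elements:
            pipeline.add(e)
        for a, b in zip(self.elements, self.elements[1:]):
            a.link(b)
        # The branch stays linked to the tee for the whole run; a buffer probe
        # on the tee request pad drops data until start() is called.
        self.tee_pad = tee.get_request_pad("src_%u")
        self.tee_pad.link(self.elements[0].get_static_pad("sink"))
        self.tee_pad.add_probe(Gst.PadProbeType.BUFFER, self._probe, None)

    def _probe(self, pad, info, user_data):
        return Gst.PadProbeReturn.OK if self.recording else Gst.PadProbeReturn.DROP

    def start(self):
        self.recording = True

    def stop(self):
        # Stop passing buffers and push EOS into this branch only, so qtmux
        # finalizes its file while the rest of the pipeline keeps running.
        self.recording = False
        self.elements[0].get_static_pad("sink").send_event(Gst.Event.new_eos())

# Usage: two branches on the same tee, controlled independently. Overlapping
# start()/stop() calls produce overlapping recordings.
# branch_a = RecordBranch(pipeline, tee, "event_a.mp4")
# branch_b = RecordBranch(pipeline, tee, "event_b.mp4")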

The smart record interfaces are in /opt/nvidia/deepstream/deepstream-5.0/sources/includes/gst-nvdssr.h


Your current pipeline adds and removes the green part shown in the following pipeline diagram.

To use the smart record interfaces, you can implement the following pipeline without any "add/remove element" operations.

You can control the "smart-rec-video-cache", "smart-rec-duration" and "smart-rec-start-time" parameters, together with "NvDsSRStart" and "NvDsSRStop", for every smart recording bin to make the videos overlap or not.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_smart_video_record.html#

As I understand from the picture, the number of smart recording bins corresponds to how many overlapping videos I can save at the same time. Is that right? If so, it will not solve my problem, since I'm recording 5-second clips of an unlimited number of events occurring in the same source, and I don't know how many events will happen at the same time. Is there a dynamic way to create smart recording bins when multiple events are being recorded?

The answer to your first question is yes.
As for your requirement of recording a random number of overlapping videos at arbitrary times, there is no easy way to do this.

We wrote code that meets these requirements using a dynamic pipeline. It works well, but it has the memory leak problem I described before.

Have you found out which part leaks? The valgrind log shows leaks in almost every module, so it may be a logic problem.

No. I think the memory leak comes from nvvideoconvert + nvv4l2h264enc, as explained in this topic: Nvvideoconvert element produces memory increases in a create-start-stop-delete sequence

@shahad This problem may be the same as this GStreamer bug: gstbaseparse: High memory usage in association index for long duration files (#468) · Issues · GStreamer / gstreamer · GitLab.