Possible memory leak in nvvidconv GStreamer plugin

Hi,

I’ve been running into a memory leak problem when continuously capturing video from a V4L2 camera (e-CAM30_CUNANO) using the GStreamer Python bindings.

I’ve attached a code snippet to illustrate the problem. The code can run one of two GStreamer pipelines (depending on the value of the use_nvvidconv variable), both of which acquire 720p video from a camera and save it to file. The only difference between the two pipelines is that one uses the nvvidconv element (and NVMM memory) to convert from UYVY to I420 format, while the other uses the standard videoconvert element.

When run, the snippet continuously acquires 3-second-long MKV videos and tracks the amount of used memory in a CSV file. After the first few iterations, it becomes apparent that with the nvvidconv pipeline the amount of used memory keeps increasing indefinitely; eventually the board runs out of memory and Linux kills the process. This does not happen with videoconvert: with that element, memory consumption stabilizes after a few iterations.

import time
import psutil

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

use_nvvidconv = True
if use_nvvidconv:
    # Simple pipeline for MKV acquisition
    pipeline = Gst.parse_launch(
        'v4l2src '
        '! video/x-raw, format=UYVY, width=1280, height=720 '
        '! nvvidconv '
        '! video/x-raw(memory:NVMM), format=I420 '
        '! omxh264enc qp-range=20,20:20,20:-1,-1 '
        '! matroskamux '
        '! filesink name=file_sink location=test.mkv'
    )
else:
    # The same pipeline without the nvvidconv plugin
    pipeline = Gst.parse_launch(
        'v4l2src '
        '! video/x-raw, format=UYVY, width=1280, height=720 '
        '! videoconvert '
        '! video/x-raw, format=I420 '
        '! omxh264enc qp-range=20,20:20,20:-1,-1 '
        '! matroskamux '
        '! filesink name=file_sink location=test.mkv'
    )

# Reset the memory file (opening in 'w' mode truncates it)
with open('memory_tracker.csv', 'w'):
    pass

while True:  # continuously capture 3 seconds long .mkv videos
    non_available_memory = str(psutil.virtual_memory().percent)
    with open('memory_tracker.csv', 'a') as f:
        f.write(non_available_memory + '\n')
    print('\n' + non_available_memory + '\n')

    pipeline.set_state(Gst.State.PLAYING)
    time.sleep(3)
    pipeline.send_event(Gst.Event.new_eos())
    time.sleep(1 / 60)  # brief pause so the EOS can reach the sink before teardown
    pipeline.set_state(Gst.State.NULL)
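Incidentally, the upward trend in the CSV is easy to confirm programmatically. A minimal sketch (the `is_leaking` helper, warm-up count, and rise threshold are illustrative choices, not part of my actual test):

```python
# Hypothetical post-processing of memory_tracker.csv readings.
def is_leaking(percentages, warmup=5, min_rise=1.0):
    """After discarding warm-up samples, report a leak if memory usage
    still rises by more than min_rise percentage points overall."""
    samples = percentages[warmup:]
    if len(samples) < 2:
        return False
    return samples[-1] - samples[0] > min_rise

# Made-up readings: videoconvert-like plateau vs. nvvidconv-like climb
stable = [40.0, 45.0, 47.0, 47.1, 47.0, 47.2, 47.1, 47.0, 47.1, 47.2]
rising = [40.0, 45.0, 47.0, 49.0, 51.0, 53.0, 55.0, 57.0, 59.0, 61.0]
print(is_leaking(stable), is_leaking(rising))  # → False True
```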

In order to check whether the problem was in the Python bindings for GStreamer, I’ve implemented the same code in C++:

#include <chrono>
#include <iomanip>
#include <iostream>
#include <fstream>
#include <string>
#include <thread>

#include <gst/gst.h>

/* Returns the non-available RAM percentage */
double get_memory_percentage() {
    std::string token, unit;
    unsigned long mem, total = 0, available = 0;
    std::ifstream file("/proc/meminfo");
    while (file >> token >> mem >> unit) {
        if (token == "MemTotal:") {
            total = mem;
        }
        if (token == "MemAvailable:") {
            available = mem;
        }
    }
    return double(total - available) / double(total) * 100.;
}

/* Updates memory CSV with current non-available RAM percentage and event */
void update_memory_csv(std::string event) {
    double percentage = get_memory_percentage();
    std::ofstream memory_file;
    memory_file.open("memory_leak.csv", std::ios_base::app);
    memory_file << std::fixed << std::setprecision(1) 
                << percentage << " " << event << "\n";
    memory_file.close();
}

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // Reset the memory file
    std::ofstream memory_file;
    memory_file.open("memory_leak.csv", std::ios_base::out);
    memory_file.close();

    GstElement *pipeline = gst_parse_launch(
        "v4l2src "
        "! video/x-raw, framerate=60/1, width=(int)1280, "
            "height=(int)720, format=(string)UYVY "
        "! nvvidconv "
        "! video/x-raw(memory:NVMM), format=(string)I420 "
        "! omxh264enc qp-range=20,20:20,20:-1,-1 "
        "! matroskamux "
        "! filesink location=test.mkv",
        NULL
    );
    while (true) {  // continuously capture 3-second recordings, as in the Python version
        update_memory_csv("ready");

        // Start recording
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        update_memory_csv("started_recording");
        std::cout << "\033[0;32m" << "Started recording" << "\033[0m" << std::endl;

        std::this_thread::sleep_for(std::chrono::seconds(3));

        // Stop recording
        gst_element_send_event(pipeline, gst_event_new_eos());
        update_memory_csv("sent_eos");
        gst_element_set_state(pipeline, GST_STATE_NULL);
        update_memory_csv("stopped_recording");
        std::cout << "\033[0;31m" << "Stopped recording" << "\033[0m" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(1000 / 60));  // seconds(1/60) would truncate to zero

        // Blank line for clarity
        memory_file.open("memory_leak.csv", std::ios_base::app);
        memory_file << "\n";
        memory_file.close();
    }
}

However, I get the same result as before: the memory leak only occurs with the nvvidconv pipeline. Since I can’t forego HW acceleration in my application, I’ve been working around the issue by periodically exiting and restarting the Python process once memory consumption reaches a certain threshold, but I would prefer a more “proper” solution.
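For reference, the workaround looks roughly like this (the `memory_percentage` helper, the `RESTART_THRESHOLD` value, and the `os.execv` re-exec are illustrative, not my exact code; the helper mirrors the C++ `get_memory_percentage` above):

```python
import os
import sys

def memory_percentage():
    """Non-available RAM percentage, read from /proc/meminfo."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            info[key] = int(value.split()[0])  # values are in kB
    total, available = info['MemTotal'], info['MemAvailable']
    return (total - available) / total * 100.0

RESTART_THRESHOLD = 85.0  # percent; picked empirically for my board

def restart_if_needed():
    # Re-exec the current interpreter in place, dropping all leaked memory.
    if memory_percentage() >= RESTART_THRESHOLD:
        os.execv(sys.executable, [sys.executable] + sys.argv)
```

Calling `restart_if_needed()` between recordings keeps the board alive, at the cost of a brief gap whenever the process re-execs.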

The code was tested on a Jetson Nano with JetPack 4.2.2, with the clocks maxed out by running:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

and using GStreamer version 1.14.5.

Any help would be greatly appreciated!

Hi,
Please apply the prebuilt lib and try again.

Hi,
Thank you for your answer.

My apologies, but how am I supposed to use this library? Should I replace libv4l2_nvvidconv with it under /usr/lib/aarch64-linux-gnu/tegra?

Hi,
Please replace

/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvidconv.so

Hi,
I’ve replaced the file as instructed and the problem does appear to be solved.

Thank you so much! :)