Nvivafilter multiple pipelines memory leak

Hi @DaveYYY
Can I make sure how did you run the APP? - Sure, here are the steps:

  1. export DISPLAY=:0
  2. ./test

And here is the output:

Opening in BLOCKING MODE
complete
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Opening in BLOCKING MODE
complete
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Opening in BLOCKING MODE
complete
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 77, Level = 0
H264: Profile = 77, Level = 0
H264: Profile = 77, Level = 0

Like is any library other than GStreamer required? - No, nothing else; as you can see, it only requires GStreamer:
g++ -Wall -std=c++11 test_leak.cpp -o test $(pkg-config --cflags --libs gstreamer-app-1.0) -ldl

Update: I was wrong about nvivafilter being the reason for the memory leak.
Here is why.
Somehow, removing nvivafilter causes 1 out of the 3 pipelines to block at random (no errors, it just gets "stuck"), and that is what prevents the memory leak from happening. I didn't spot it before because the sink keeps displaying the last frame, so I had to put something moving in front of the camera to notice it. As I've mentioned above, 2 pipelines do run without memory leaks.

Also want to know how did you record the memory usage to get the diagram. - Sure:

  1. Script for collecting memory usage (a simple bash script; see the sketch after this list).
    mem_usage_script.txt (116 Bytes)
  2. Script for plotting with gnuplot (you will have to install gnuplot).
    plot_memory.txt (206 Bytes)
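
The collection script itself is trivial. In case the attachment is not accessible, here is a rough stand-in: a hypothetical C++ sketch, assuming the idea is just to sample the resident memory of the test process once per second and log it in a form gnuplot can plot; the actual bash attachment may differ.

    // Hypothetical sketch only: samples VmRSS of one PID from /proc/<pid>/status
    // once per second and prints "seconds kB" pairs, which gnuplot can plot
    // directly with: plot "mem.log" using 1:2 with lines
    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <thread>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            std::cerr << "usage: memlog <pid>\n";
            return 1;
        }
        const std::string status_path = "/proc/" + std::string(argv[1]) + "/status";

        for (long t = 0; ; ++t) {                      // stop with Ctrl+C
            std::ifstream status(status_path);
            std::string line;
            while (std::getline(status, line)) {
                if (line.rfind("VmRSS:", 0) == 0) {    // e.g. "VmRSS:   123456 kB"
                    long kb = 0;
                    std::istringstream fields(line.substr(6));
                    fields >> kb;
                    std::cout << t << " " << kb << std::endl;
                }
            }
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }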

Somehow it is related to the DSI output? Probably yes. The DSI panel has a 60 Hz refresh rate and the HDMI monitor runs at 144 Hz. When I'm running 3 pipelines simultaneously I do see frame drops on all nv3dsinks: instead of 30 FPS I get only 20, which is 60 / 3, similar to this post:

It also explains why everything works well with up to 2 pipelines: we are requesting 30 FPS, and 60 / 2 = 30, so the display can keep up.

Is it all tied to the screen refresh rate?
If I fork pipelines like this:

    for (int i = 0; i < 3; ++i) {
        pid_t pid = fork();

        if (pid < 0) {
            std::cerr << "Fork failed.\n";
            return 1;
        }

        if (pid == 0) {
            // Child process: its own GStreamer instance, its own single pipeline.
            gst_init (&argc, &argv);
            start(i);
            return 0; // keep a returning child from falling through and forking again
        }
    }

Instead of:

gst_init (&argc, &argv);

start(0);
start(1);
start(2);

The issue goes away.
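
For context, start(i) essentially builds one pipeline and then blocks until that pipeline finishes. Below is a minimal sketch of that shape, assuming a nvarguscamerasrc ! nvivafilter ! nv3dsink chain built with gst_parse_launch and a GMainLoop; the properties are placeholders, not my exact pipeline, and the encoder branch from the NVENC log lines is omitted.

    // Minimal sketch of a per-pipeline start() helper (placeholder pipeline string).
    // The important part for the fork workaround is that it blocks in
    // g_main_loop_run(), so each child owns exactly one pipeline for its lifetime.
    #include <gst/gst.h>
    #include <string>

    static void start(int sensor_id)
    {
        std::string desc =
            "nvarguscamerasrc sensor-id=" + std::to_string(sensor_id) +
            " ! nvivafilter cuda-process=true"
            " customer-lib-name=libnvsample_cudaprocess.so"
            " ! video/x-raw(memory:NVMM),format=NV12"
            " ! nv3dsink";

        GError *err = nullptr;
        GstElement *pipeline = gst_parse_launch(desc.c_str(), &err);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n", err ? err->message : "unknown");
            g_clear_error(&err);
            return;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        GMainLoop *loop = g_main_loop_new(nullptr, FALSE);
        g_main_loop_run(loop);   // blocks; a bus watch would quit it on EOS/error

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        g_main_loop_unref(loop);
    }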

Any ideas would be appreciated!