Memory leak in NVENC encoders on Jetson Orin NX/AGX on JetPack 6.2 [L4T 36.4.3]

Hey guys,
I am on an NVIDIA Jetson Orin NX Developer Kit running JetPack 6.2 [L4T 36.4.3]. I can reproduce the same behaviour on a Jetson AGX developer kit.

I have noticed a constant memory leak whenever I use an NVENC encoder in my GStreamer pipelines.
My pipeline uses Google's congestion control internally, and every time the caps are renegotiated, a new NVENC session is initialized, which spawns around 5 new threads without cleaning up the previously spawned ones. This causes memory usage to grow continuously until it fills up the whole RAM.
I can also confirm that this issue does not show up when I switch from nvv4l2h264enc to the CPU-based x264enc without changing anything else in the pipeline. The following block shows up repeatedly in the logs, and each time it does, ~5 new threads are spawned on the CPU.

 H264: extProfile = 2 Level = 0
 NVMEDIA: Need to set EMC bandwidth : 126000
 NvVideo: bBlitMode is set to TRUE
 NvMMLiteOpen : Block : BlockType = 4
 ===== NvVideo: NVENC =====
 NvMMLiteBlockCreate : Block : BlockType = 4
 H264: extProfile = 2 Level = 0
 NVMEDIA: Need to set EMC bandwidth : 376000
 NvVideo: bBlitMode is set to TRUE
 NvMMLiteOpen : Block : BlockType = 4
 ===== NvVideo: NVENC =====
 NvMMLiteBlockCreate : Block : BlockType = 4
 extProfile = 2 Level = 0
 NVMEDIA: Need to set EMC bandwidth : 2872000
 NvVideo: bBlitMode is set to TRUE

Please let me know if you require any additional information for debugging or reproducing this.
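
For anyone wanting to reproduce the thread growth, it can be watched directly from `/proc` on Linux. This is a minimal sketch of my own (not part of the original pipeline) that counts the threads of a given PID:

```python
import os


def thread_count(pid: int) -> int:
    """Count the threads of a process by listing /proc/<pid>/task (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/task"))


if __name__ == "__main__":
    # Example: inspect our own process; substitute the pipeline's PID instead.
    print(thread_count(os.getpid()))
```

Running this in a loop (e.g. once per second) against the pipeline process while caps renegotiation happens should show the count climbing in the NVENC case.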

Hi,
It looks like you don't terminate the GStreamer pipeline before re-launching it. Please share why the caps are renegotiated. If the data source changes resolution, the encoder has to be terminated and re-initialized with the new resolution.

Also, please try to reproduce the issue on a developer kit with JetPack 6.2.1 (r36.4.4). If you can reproduce it, please share the test sample with us, and we will set it up and check.

Hey @DaneLLL
I am streaming this data over WebRTC; I need dynamic congestion control, and it has to happen without terminating the pipeline. The caps are renegotiated because when congestion control reduces the bitrate, the video resolution dynamically scales down to accommodate the new bitrate.
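
For context, the renegotiation described above boils down to picking the ladder rung that fits the congestion-controlled bitrate. A simplified, hypothetical illustration (the real selection logic lives inside the WebRTC stack; the ladder values here mirror the test script further down):

```python
# (width, height, bitrate in bits/sec), ordered lowest to highest.
LADDER = [
    (426, 240, 300_000),
    (640, 360, 600_000),
    (854, 480, 1_200_000),
    (1280, 720, 2_500_000),
    (1920, 1080, 4_000_000),
]


def select_rung(target_bps: int) -> tuple[int, int, int]:
    """Pick the highest rung whose bitrate fits the congestion-controlled
    target, falling back to the lowest rung when nothing fits."""
    best = LADDER[0]
    for rung in LADDER:
        if rung[2] <= target_bps:
            best = rung
    return best
```

Each time `select_rung` returns a different resolution than the current one, the encoder caps have to be renegotiated, which is where the NVENC re-initialization happens.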

The main issue occurs during this termination and re-initialization: previously held threads are not cleaned up, causing a continuous increase in memory every time the caps are renegotiated with the encoder.

This behaviour is only observed with the NVENC encoders and not with others. I have tested the rest of the pipeline with CPU-based encoders such as x264enc, and there the encoder session is cleaned up without a memory leak and re-initialized properly whenever caps renegotiation happens.

I will try this out on JetPack 6.2.1 (r36.4.4) and post updates here shortly.

Hi,
Runtime resolution change is not supported in the nvv4l2h264enc and nvv4l2h265enc plugins. The pipeline has to be destroyed and re-initialized. Please share a test sample; we will check with our team and provide further suggestions.

Hi,

I am sharing a simple script that roughly simulates the congestion-control behaviour to help test this. Exactly matching the original scenario with live congestion control would involve a lot of custom setup.

Run the script as follows:

# For testing via the GPU (NVENC) encoder.
./test-encoders.py --encoder gpu
# For testing via the CPU encoder.
./test-encoders.py --encoder cpu

Running the script below should cause a continuous memory increase when using the GPU encoder, which can be profiled by watching jtop.

#!/usr/bin/env python3
"""
A GStreamer script to test dynamic bitrate and resolution changes.

This script simulates adaptive streaming by cycling through a predefined ladder
of resolutions and bitrates. It can be configured to use either a
CPU-based encoder (x264enc) or a GPU-based NVIDIA encoder (NVENC).
"""
from __future__ import annotations
import argparse
import sys
import signal
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def make_element(factory: str, name: str | None = None, **props) -> Gst.Element:
    """Create a GStreamer element with error checking."""
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        print(
            f"[error] Required element '{factory}' is missing.\n"
            f"        Please check your GStreamer installation (gst-inspect-1.0 {factory}).",
            file=sys.stderr,
        )
        sys.exit(1)
    for k, v in props.items():
        elem.set_property(k, v)
    return elem


def get_encoder_pipeline_elements(encoder_type: str) -> tuple[str, str, str]:
    """
    Selects the appropriate converter, caps format, and encoder based on user choice.

    Returns a tuple of: (converter_name, caps_string_format, encoder_name)
    """
    if encoder_type == "gpu":
        # Check for NVIDIA converters first
        conv_cand = next((c for c in ("nvvconv", "nvvidconv", "nvvideoconvert") if Gst.ElementFactory.find(c)), None)
        if not conv_cand:
            print("[error] No NVIDIA video converter found (e.g., nvvideoconvert). GPU encoding is not possible.", file=sys.stderr)
            sys.exit(1)

        # Check for NVIDIA encoders
        enc_cand = next((e for e in ("nvv4l2h264enc", "nvh264enc") if Gst.ElementFactory.find(e)), None)
        if not enc_cand:
            print("[error] No NVIDIA H.264 encoder found (nvv4l2h264enc, nvh264enc).", file=sys.stderr)
            sys.exit(1)

        # GPU encoders work best with NVMM memory
        caps_format = "video/x-raw(memory:NVMM),format=I420"
        return conv_cand, caps_format, enc_cand

    elif encoder_type == "cpu":
        # CPU encoding requires a standard converter
        if not Gst.ElementFactory.find("videoconvert"):
            print("[error] 'videoconvert' element not found. CPU encoding is not possible.", file=sys.stderr)
            sys.exit(1)

        # Check for the CPU-based x264enc
        if not Gst.ElementFactory.find("x264enc"):
            print("[error] 'x264enc' element not found. Please install gst-plugins-ugly.", file=sys.stderr)
            sys.exit(1)

        # CPU encoders use standard system memory
        caps_format = "video/x-raw,format=I420"
        return "videoconvert", caps_format, "x264enc"

    else:
        # This case is for safety; argparse should prevent it.
        print(f"[error] Invalid encoder type specified: {encoder_type}", file=sys.stderr)
        sys.exit(1)


# ---------------------------------------------------------------------------
# Pipeline builder
# ---------------------------------------------------------------------------

def build_pipeline(
    use_device: str | None,
    pattern: int,
    size: str,
    conv_name: str,
    encoder_name: str,
    caps_format_str: str,
) -> Gst.Pipeline:
    """Build the GStreamer pipeline with the specified elements."""
    try:
        width, height = (int(x) for x in size.lower().split("x"))
    except ValueError:
        print(f"[error] Invalid size format: {size}. Use WxH format (e.g., 640x480)", file=sys.stderr)
        sys.exit(1)

    # Create source
    src = make_element("v4l2src", "src", device=use_device) if use_device else \
          make_element("videotestsrc", "src", is_live=True, pattern=pattern)

    # Raw caps for the source output
    caps_raw = make_element(
        "capsfilter",
        "caps_raw",
        caps=Gst.Caps.from_string(
            f"video/x-raw,format=NV12,width={width},height={height},framerate=30/1"
        )
    )

    # Converter and its output caps
    conv = make_element(conv_name, "conv")
    caps_enc_in = make_element(
        "capsfilter",
        "caps_enc_in",
        caps=Gst.Caps.from_string(
            f"{caps_format_str},width={width},height={height},framerate=30/1"
        )
    )

    # Encoder
    if encoder_name == "nvv4l2h264enc":
        enc = make_element(encoder_name, "enc", bitrate=3_000_000, insert_sps_pps=True, idrinterval=30)
    elif encoder_name == "nvh264enc":
        enc = make_element(encoder_name, "enc", bitrate=3000, gop_size=30)  # Uses kbps
    else:  # x264enc
        enc = make_element(encoder_name, "enc", bitrate=3000, key_int_max=30, tune="zerolatency") # Uses kbps

    parse = make_element("h264parse", "parse")
    sink = make_element("fakesink", "sink", sync=False)

    # Create and build pipeline
    pipeline = Gst.Pipeline.new("adaptive-pipeline")
    elements = [src, caps_raw, conv, caps_enc_in, enc, parse, sink]
    for elem in elements:
        pipeline.add(elem)

    # Link elements one-by-one for better error reporting
    if not src.link(caps_raw):
        print("[error] Failed to link source to raw caps filter", file=sys.stderr)
        sys.exit(1)
    if not caps_raw.link(conv):
        print("[error] Failed to link raw caps filter to converter", file=sys.stderr)
        sys.exit(1)
    if not conv.link(caps_enc_in):
        print("[error] Failed to link converter to encoder input caps filter", file=sys.stderr)
        sys.exit(1)
    if not caps_enc_in.link(enc):
        print("[error] Failed to link encoder input caps filter to encoder", file=sys.stderr)
        sys.exit(1)
    if not enc.link(parse):
        print("[error] Failed to link encoder to parser", file=sys.stderr)
        sys.exit(1)
    if not parse.link(sink):
        print("[error] Failed to link parser to sink", file=sys.stderr)
        sys.exit(1)

    print(f"[info] Using converter: {conv_name}")
    print(f"[info] Using encoder: {encoder_name}")
    return pipeline


def setup_bus_watch(pipeline: Gst.Pipeline) -> None:
    """Set up bus message handling for debugging."""
    bus = pipeline.get_bus()
    bus.add_signal_watch()

    def on_message(bus, message):
        t = message.type
        if t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print(f"[error] {err}: {debug}", file=sys.stderr)
            # Stop the pipeline on a fatal error; the script can then be
            # exited with Ctrl-C. (Quitting a freshly constructed
            # GLib.MainLoop here would be a no-op, as it is not the
            # loop that is actually running.)
            pipeline.set_state(Gst.State.NULL)
        elif t == Gst.MessageType.WARNING:
            warn, debug = message.parse_warning()
            print(f"[warning] {warn}: {debug}", file=sys.stderr)
        elif t == Gst.MessageType.EOS:
            print("[info] End of stream")
            pipeline.set_state(Gst.State.NULL)

    bus.connect("message", on_message)


# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

def main() -> None:
    parser = argparse.ArgumentParser(description="Dynamic bitrate and resolution adaptation test.")
    parser.add_argument("--device", help="/dev/videoX to use instead of test pattern")
    parser.add_argument("--encoder", type=str, choices=["cpu", "gpu"], default="gpu", help="Encoder to use: 'cpu' (x264enc) or 'gpu' (NVENC)")
    parser.add_argument("--period", type=float, default=3.0, help="Seconds between adaptations")
    parser.add_argument("--pattern", type=int, default=0, help="videotestsrc pattern ID")
    parser.add_argument("--mode", choices=["bitrate", "resolution", "both"], default="both",
                       help="What to adapt: bitrate only, resolution only, or both")
    args = parser.parse_args()

    adaptation_ladder = [
        {"width": 320, "height": 240, "bitrate": 150_000, "name": "QVGA"},
        {"width": 426, "height": 240, "bitrate": 300_000, "name": "240p"},
        {"width": 640, "height": 360, "bitrate": 600_000, "name": "360p"},
        {"width": 854, "height": 480, "bitrate": 1_200_000, "name": "480p"},
        {"width": 1280, "height": 720, "bitrate": 2_500_000, "name": "720p"},
        {"width": 1920, "height": 1080, "bitrate": 4_000_000, "name": "1080p"},
    ]
    current_level = 2
    direction = 1  # 1 for up, -1 for down

    # Determine encoder-specific elements before building the pipeline
    conv_name, caps_format_str, encoder_name = get_encoder_pipeline_elements(args.encoder)

    # Build initial pipeline
    initial_config = adaptation_ladder[current_level]
    initial_size = f"{initial_config['width']}x{initial_config['height']}"
    pipeline = build_pipeline(args.device, args.pattern, initial_size, conv_name, encoder_name, caps_format_str)

    # Get elements for dynamic changes
    enc: Gst.Element = pipeline.get_by_name("enc")
    caps_raw: Gst.Element = pipeline.get_by_name("caps_raw")
    caps_enc_in: Gst.Element = pipeline.get_by_name("caps_enc_in")

    setup_bus_watch(pipeline)

    # Determine if the chosen encoder uses bits/sec or kbits/sec
    uses_bps = encoder_name == "nvv4l2h264enc"

    # Set initial bitrate
    initial_bitrate = initial_config["bitrate"]
    enc.set_property("bitrate", initial_bitrate if uses_bps else initial_bitrate // 1000)
    print(f"[init] Starting with {initial_config['name']} - {initial_config['width']}x{initial_config['height']} @ {initial_bitrate/1000:.1f} kbps")

    def adapt_stream() -> bool:
        nonlocal current_level, direction
        current_level += direction
        if not 0 <= current_level < len(adaptation_ladder):
            direction *= -1
            current_level += 2 * direction
            current_level = max(0, min(current_level, len(adaptation_ladder) - 1))

        config = adaptation_ladder[current_level]
        new_bitrate = config["bitrate"]
        new_width, new_height = config["width"], config["height"]

        # Apply changes based on mode
        if args.mode in ["bitrate", "both"]:
            enc.set_property("bitrate", new_bitrate if uses_bps else new_bitrate // 1000)

        if args.mode in ["resolution", "both"]:
            try:
                # Update resolution by changing caps dynamically
                new_caps_raw = Gst.Caps.from_string(f"video/x-raw,format=NV12,width={new_width},height={new_height},framerate=30/1")
                caps_raw.set_property("caps", new_caps_raw)

                # Use the correct caps format string (with or without NVMM)
                new_caps_enc_in = Gst.Caps.from_string(f"{caps_format_str},width={new_width},height={new_height},framerate=30/1")
                caps_enc_in.set_property("caps", new_caps_enc_in)

                print(f"[adapt] → {config['name']} ({new_width}x{new_height} @ {new_bitrate/1000:.1f} kbps)")
            except Exception as e:
                print(f"[warning] Resolution change failed: {e}. Adapting bitrate only.", file=sys.stderr)
                print(f"[adapt] → bitrate only: {new_bitrate/1000:.1f} kbps")
        else:
            print(f"[adapt] → {config['name']} bitrate: {new_bitrate/1000:.1f} kbps")

        return True  # Keep the timer running

    GLib.timeout_add(max(int(args.period * 1000), 1000), adapt_stream)
    main_loop = GLib.MainLoop()

    def sigint_handler(_sig, _frame):
        print("\nStopping…")
        pipeline.set_state(Gst.State.NULL)
        main_loop.quit()

    signal.signal(signal.SIGINT, sigint_handler)

    # Start pipeline
    ret = pipeline.set_state(Gst.State.PLAYING)
    if ret == Gst.StateChangeReturn.FAILURE:
        print("[error] Failed to start pipeline.", file=sys.stderr)
        sys.exit(1)

    print("Pipeline running. Press Ctrl‑C to stop.")
    print("Adaptation ladder: QVGA ↔ 240p ↔ 360p ↔ 480p ↔ 720p ↔ 1080p")

    try:
        main_loop.run()
    except KeyboardInterrupt:
        sigint_handler(None, None)
    finally:
        pipeline.set_state(Gst.State.NULL)


if __name__ == "__main__":
    main()

Hi,
We tried the sample on an Orin NX developer kit with r36.4.3 and hit the following issue:

$ python3 ./test-encoders.py --encoder cpu
[info] Using converter: videoconvert
[info] Using encoder: x264enc
[init] Starting with 360p - 640x360 @ 600.0 kbps
Pipeline running. Press Ctrl‑C to stop.
Adaptation ladder: QVGA ↔ 240p ↔ 360p ↔ 480p ↔ 720p ↔ 1080p
[adapt] → 480p (854x480 @ 1200.0 kbps)
[warning] gst-stream-error-quark: not negotiated (11): ../libs/gst/base/gstbasetransform.c(1431): gst_base_transform_reconfigure_unlocked (): /GstPipeline:adaptive-pipeline/GstCapsFilter:caps_raw:
not negotiated
[adapt] → 720p (1280x720 @ 2500.0 kbps)
[warning] gst-stream-error-quark: not negotiated (11): ../libs/gst/base/gstbasetransform.c(1431): gst_base_transform_reconfigure_unlocked (): /GstPipeline:adaptive-pipeline/GstCapsFilter:caps_raw:
not negotiated
[adapt] → 1080p (1920x1080 @ 4000.0 kbps)
[adapt] → 720p (1280x720 @ 2500.0 kbps)
[warning] gst-stream-error-quark: not negotiated (11): ../libs/gst/base/gstbasetransform.c(1431): gst_base_transform_reconfigure_unlocked (): /GstPipeline:adaptive-pipeline/GstCapsFilter:caps_raw:
not negotiated
$ python3 ./test-encoders.py --encoder gpu
[info] Using converter: nvvidconv
[info] Using encoder: nvv4l2h264enc
[init] Starting with 360p - 640x360 @ 600.0 kbps
Opening in BLOCKING MODE
Pipeline running. Press Ctrl‑C to stop.
Adaptation ladder: QVGA ↔ 240p ↔ 360p ↔ 480p ↔ 720p ↔ 1080p
NvMMLiteOpen : Block : BlockType = 4
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66 Level = 0
NVMEDIA: Need to set EMC bandwidth : 126000
NvVideo: bBlitMode is set to TRUE
[adapt] → 480p (854x480 @ 1200.0 kbps)
[warning] gst-stream-error-quark: not negotiated (11): ../libs/gst/base/gstbasetransform.c(1431): gst_base_transform_reconfigure_unlocked (): /GstPipeline:adaptive-pipeline/GstCapsFilter:caps_raw:
not negotiated
NvMMLiteOpen : Block : BlockType = 4
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy
H264: Profile = 66 Level = 0
NVMEDIA: Need to set EMC bandwidth : 376000
NvVideo: bBlitMode is set to TRUE
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy

(...skip...)

[adapt] → 720p (1280x720 @ 2500.0 kbps)
[warning] gst-stream-error-quark: not negotiated (11): ../libs/gst/base/gstbasetransform.c(1431): gst_base_transform_reconfigure_unlocked (): /GstPipeline:adaptive-pipeline/GstCapsFilter:caps_raw:
not negotiated
NvMMLiteOpen : Block : BlockType = 4
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy
H264: Profile = 66 Level = 0
NVMEDIA: Need to set EMC bandwidth : 376000
NvVideo: bBlitMode is set to TRUE
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy
nvbufsurface: NvBufSurfaceCopy: buffer param mismatch
nvbufsurface: NvBufSurfaceCopy: failed to copy

Please help check what is missing so that we can run the sample correctly.

And as mentioned before, runtime resolution change without re-initializing nvv4l2h264enc is not supported. This is a feature request that we will need to evaluate, which will take some time. Please note this.

Hey

These errors are expected; they are mainly a side effect of how we are crudely simulating congestion control. Apart from that, the script still runs correctly.
If you monitor memory usage while running this script, you'll notice that in the GPU case the memory keeps increasing, which does not happen in the CPU case. The only change within the actual pipeline is the encoder, so I really do think this is an issue that needs to be patched.

I would recommend using jtop to monitor this when running the GPU pipeline.

I think NVENC is internally re-initializing correctly, but it is not cleaning up the previously spawned threads. This makes these encoders unusable for any practical streaming solution over WebRTC and the like. I hope the team recognizes the issue and works toward feature parity with the other encoders.


Hi,
Would like to confirm. In using videoconvert + x264enc plugins, even though this message is printed:

[warning] gst-stream-error-quark: not negotiated (11): ../libs/gst/base/gstbasetransform.c(1431): gst_base_transform_reconfigure_unlocked (): /GstPipeline:adaptive pipeline/GstCapsFilter:caps_raw: not negotiated

The resolution still changes successfully. Is this correct?

And we may take some time to check this. Is there a chance you can consider use jetson_multimedia_api? The demonstration is in 01_video_encode sample. Once resolution changes, you can destroy NvVideoEncoder and all NvBufSurface, and re-initialize them with new resolution.

Hey, yes,
The error you're seeing is safe to ignore.
Apologies for giving you a script with fakesink; I have modified the script below to actually display the test stream with FPS info.
However, I must say it is not very straightforward to observe this memory leak unless you run the pipeline continuously for a long time.
The best way I have found so far is to keep htop open showing process threads.
In the CPU encoder scenario you will see the threads clean up correctly: they increase and decrease as required.
In the GPU encoder scenario, however, the thread count keeps increasing.
You may need to wait at least 5 minutes to see a significant difference between the two scenarios.
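
To turn the htop observation into a number, one crude heuristic (my own sketch, not part of the pipeline) is to sample the thread count periodically and compare the first and last samples:

```python
def looks_leaky(samples: list[int], tolerance: int = 2) -> bool:
    """Heuristic leak check: flag a leak if the thread count at the end of
    the run exceeds the count at the start by more than `tolerance`.

    `tolerance` absorbs normal jitter (threads that come and go); the exact
    value is an arbitrary choice for illustration."""
    return len(samples) >= 2 and samples[-1] - samples[0] > tolerance
```

In the CPU scenario the samples oscillate around a baseline and this returns False; in the NVENC scenario the count only ever grows, so it returns True after a few renegotiation cycles.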

Also, with regard to using jetson_multimedia_api: I am not sure that would be feasible, mainly because of how many moving parts would need to change. We are deeply integrated with GStreamer's wide range of plugins plus a Rust codebase, so getting this to work would mean writing a new GStreamer plugin that does exactly what the official plugin is supposed to do.

#!/usr/bin/env python3
"""
A GStreamer script to test dynamic bitrate and resolution changes,
with an FPS overlay via fpsdisplaysink → autovideosink.
"""

from __future__ import annotations
import argparse
import sys
import signal
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)


def make_element(factory: str, name: str | None = None, **props) -> Gst.Element:
    """Create a GStreamer element with error checking."""
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        print(
            f"[error] Required element '{factory}' is missing.\n"
            f"        Please check your GStreamer installation (gst-inspect-1.0 {factory}).",
            file=sys.stderr,
        )
        sys.exit(1)
    for k, v in props.items():
        elem.set_property(k, v)
    return elem


def get_encoder_pipeline_elements(encoder_type: str) -> tuple[str, str, str]:
    """
    Selects the appropriate converter, caps format, and encoder based on user choice.
    Returns (converter_name, caps_string_format, encoder_name).
    """
    if encoder_type == "gpu":
        # NVIDIA converters
        conv_cand = next(
            (c for c in ("nvvconv", "nvvidconv", "nvvideoconvert")
             if Gst.ElementFactory.find(c)),
            None
        )
        if not conv_cand:
            print("[error] No NVIDIA video converter found.", file=sys.stderr)
            sys.exit(1)

        # NVIDIA encoders
        enc_cand = next(
            (e for e in ("nvv4l2h264enc", "nvh264enc")
             if Gst.ElementFactory.find(e)),
            None
        )
        if not enc_cand:
            print("[error] No NVIDIA H.264 encoder found.", file=sys.stderr)
            sys.exit(1)

        caps_format = "video/x-raw(memory:NVMM),format=I420"
        return conv_cand, caps_format, enc_cand

    elif encoder_type == "cpu":
        if not Gst.ElementFactory.find("videoconvert"):
            print("[error] 'videoconvert' element not found.", file=sys.stderr)
            sys.exit(1)
        if not Gst.ElementFactory.find("x264enc"):
            print("[error] 'x264enc' element not found. Please install gst-plugins-ugly.", file=sys.stderr)
            sys.exit(1)

        caps_format = "video/x-raw,format=I420"
        return "videoconvert", caps_format, "x264enc"

    else:
        print(f"[error] Invalid encoder type: {encoder_type}", file=sys.stderr)
        sys.exit(1)


def build_pipeline(
    use_device: str | None,
    pattern: int,
    size: str,
    conv_name: str,
    encoder_name: str,
    caps_format_str: str,
) -> Gst.Pipeline:
    """Build the GStreamer pipeline with encoding + FPS-overlay display."""
    try:
        width, height = (int(x) for x in size.lower().split("x"))
    except ValueError:
        print(f"[error] Invalid size format: {size}. Use WxH (e.g., 640x480)", file=sys.stderr)
        sys.exit(1)

    # --- Source & caps ---
    if use_device:
        src = make_element("v4l2src", "src", device=use_device)
    else:
        src = make_element("videotestsrc", "src", is_live=True, pattern=pattern)

    caps_raw = make_element(
        "capsfilter", "caps_raw",
        caps=Gst.Caps.from_string(
            f"video/x-raw,format=NV12,width={width},height={height},framerate=30/1"
        )
    )

    # --- Converter & encoder input caps ---
    conv = make_element(conv_name, "conv")
    caps_enc_in = make_element(
        "capsfilter", "caps_enc_in",
        caps=Gst.Caps.from_string(
            f"{caps_format_str},width={width},height={height},framerate=30/1"
        )
    )

    # --- Encoder ---
    if encoder_name == "nvv4l2h264enc":
        enc = make_element(
            encoder_name, "enc",
            bitrate=3_000_000,
            insert_sps_pps=True,
            idrinterval=30
        )
    elif encoder_name == "nvh264enc":
        enc = make_element(
            encoder_name, "enc",
            bitrate=3000,
            gop_size=30
        )
    else:  # x264enc
        enc = make_element(
            encoder_name, "enc",
            bitrate=3000,
            key_int_max=30,
            tune="zerolatency"
        )

    # --- Parser & Decoder for display branch ---
    parse = make_element("h264parse", "parse")
    decoder = make_element("avdec_h264", "decoder")
    conv2 = make_element("videoconvert", "conv2")

    # --- FPS overlay + video sink ---
    video_sink = make_element("autovideosink", "video_sink")
    fps_sink = make_element(
        "fpsdisplaysink", "fps_sink",
        text_overlay=True,
        sync=False
    )
    fps_sink.set_property("video-sink", video_sink)

    # --- Build pipeline ---
    pipeline = Gst.Pipeline.new("adaptive-pipeline")
    # Note: video_sink is not added to the pipeline here; fpsdisplaysink is
    # a bin that already owns it via its "video-sink" property, and adding
    # the same element to two parents fails.
    for elem in (
        src, caps_raw, conv, caps_enc_in,
        enc, parse,
        decoder, conv2,
        fps_sink,
    ):
        pipeline.add(elem)

    # --- Link encoding branch ---
    if not src.link(caps_raw):
        sys.exit("[error] Failed to link src → caps_raw")
    if not caps_raw.link(conv):
        sys.exit("[error] Failed to link caps_raw → conv")
    if not conv.link(caps_enc_in):
        sys.exit("[error] Failed to link conv → caps_enc_in")
    if not caps_enc_in.link(enc):
        sys.exit("[error] Failed to link caps_enc_in → enc")
    if not enc.link(parse):
        sys.exit("[error] Failed to link enc → parse")

    # --- Link display branch ---
    if not parse.link(decoder):
        sys.exit("[error] Failed to link parse → decoder")
    if not decoder.link(conv2):
        sys.exit("[error] Failed to link decoder → conv2")
    if not conv2.link(fps_sink):
        sys.exit("[error] Failed to link conv2 → fps_sink")

    print(f"[info] Using converter: {conv_name}")
    print(f"[info] Using encoder:   {encoder_name}")
    return pipeline


def setup_bus_watch(pipeline: Gst.Pipeline) -> None:
    """Set up bus message handling for logging/debug."""
    bus = pipeline.get_bus()
    bus.add_signal_watch()

    def on_message(bus, message):
        t = message.type
        if t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print(f"[error] {err}: {debug}", file=sys.stderr)
            # Stop the pipeline on a fatal error; the script can then be
            # exited with Ctrl-C. (Quitting a freshly constructed
            # GLib.MainLoop here would be a no-op, as it is not the
            # loop that is actually running.)
            pipeline.set_state(Gst.State.NULL)
        elif t == Gst.MessageType.WARNING:
            warn, debug = message.parse_warning()
            print(f"[warning] {warn}: {debug}", file=sys.stderr)
        elif t == Gst.MessageType.EOS:
            print("[info] End of stream")
            pipeline.set_state(Gst.State.NULL)

    bus.connect("message", on_message)


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Dynamic bitrate/resolution adaptation with FPS overlay."
    )
    parser.add_argument("--device", help="/dev/videoX to use instead of test pattern")
    parser.add_argument(
        "--encoder", choices=["cpu", "gpu"],
        default="gpu",
        help="Encoder type: 'cpu' (x264enc) or 'gpu' (NVENC)"
    )
    parser.add_argument(
        "--period", type=float, default=3.0,
        help="Seconds between adaptations"
    )
    parser.add_argument("--pattern", type=int, default=0, help="videotestsrc pattern ID")
    parser.add_argument(
        "--mode",
        choices=["bitrate", "resolution", "both"],
        default="both",
        help="Adapt bitrate only, resolution only, or both"
    )
    args = parser.parse_args()

    # adaptation ladder
    ladder = [
        {"width": 320,  "height": 240,  "bitrate": 150_000,   "name": "QVGA"},
        {"width": 426,  "height": 240,  "bitrate": 300_000,   "name": "240p"},
        {"width": 640,  "height": 360,  "bitrate": 600_000,   "name": "360p"},
        {"width": 854,  "height": 480,  "bitrate": 1_200_000, "name": "480p"},
        {"width": 1280, "height": 720,  "bitrate": 2_500_000, "name": "720p"},
        {"width": 1920, "height": 1080, "bitrate": 4_000_000, "name": "1080p"},
    ]
    current_level = 2
    direction = 1

    # select converter & encoder
    conv_name, caps_fmt, enc_name = get_encoder_pipeline_elements(args.encoder)

    # build pipeline
    init_cfg = ladder[current_level]
    pipeline = build_pipeline(
        args.device,
        args.pattern,
        f"{init_cfg['width']}x{init_cfg['height']}",
        conv_name,
        enc_name,
        caps_fmt,
    )

    # element refs for runtime adaptation
    enc = pipeline.get_by_name("enc")
    caps_raw = pipeline.get_by_name("caps_raw")
    caps_in = pipeline.get_by_name("caps_enc_in")

    setup_bus_watch(pipeline)

    # determine bitrate units
    uses_bps = (enc_name == "nvv4l2h264enc")
    # set initial bitrate
    initial_bitrate = init_cfg["bitrate"]
    enc.set_property(
        "bitrate",
        initial_bitrate if uses_bps else initial_bitrate // 1000
    )
    print(
        f"[init] Starting with {init_cfg['name']} - "
        f"{init_cfg['width']}x{init_cfg['height']} @ {initial_bitrate/1000:.1f} kbps"
    )

    def adapt_stream() -> bool:
        nonlocal current_level, direction
        current_level += direction
        if not 0 <= current_level < len(ladder):
            direction *= -1
            current_level += 2 * direction
            current_level = max(0, min(current_level, len(ladder) - 1))

        cfg = ladder[current_level]
        new_bitrate = cfg["bitrate"]

        if args.mode in ("bitrate", "both"):
            enc.set_property(
                "bitrate",
                new_bitrate if uses_bps else new_bitrate // 1000
            )

        if args.mode in ("resolution", "both"):
            try:
                caps_raw.set_property(
                    "caps",
                    Gst.Caps.from_string(
                        f"video/x-raw,format=NV12,"
                        f"width={cfg['width']},height={cfg['height']},"
                        "framerate=30/1"
                    )
                )
                caps_in.set_property(
                    "caps",
                    Gst.Caps.from_string(
                        f"{caps_fmt},width={cfg['width']},"
                        f"height={cfg['height']},framerate=30/1"
                    )
                )
                print(
                    f"[adapt] → {cfg['name']} "
                    f"({cfg['width']}x{cfg['height']} @ {new_bitrate/1000:.1f} kbps)"
                )
            except Exception as e:
                print(f"[warning] Resolution change failed: {e}", file=sys.stderr)
                print(f"[adapt] → bitrate only: {new_bitrate/1000:.1f} kbps")
        else:
            print(f"[adapt] → {cfg['name']} bitrate: {new_bitrate/1000:.1f} kbps")

        return True

    GLib.timeout_add(max(int(args.period * 1000), 1000), adapt_stream)

    # handle SIGINT; quit the same main loop that run() is called on,
    # rather than constructing a new GLib.MainLoop inside the handler
    loop = GLib.MainLoop()

    def on_sigint(_, __):
        print("\nStopping…")
        pipeline.set_state(Gst.State.NULL)
        loop.quit()

    signal.signal(signal.SIGINT, on_sigint)

    # start
    if pipeline.set_state(Gst.State.PLAYING) == Gst.StateChangeReturn.FAILURE:
        print("[error] Failed to start pipeline.", file=sys.stderr)
        sys.exit(1)

    print("Pipeline running. Press Ctrl-C to stop.")
    print("Adaptation ladder: QVGA ↔ 240p ↔ 360p ↔ 480p ↔ 720p ↔ 1080p")

    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)


if __name__ == "__main__":
    main()
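
To quantify the behaviour reported above (thread count and resident memory growing with each renegotiation) while the pipeline runs, a small helper can sample `/proc`. This is a sketch using standard Linux interfaces only; the function name is illustrative:

```python
import os


def process_stats(pid=None):
    """Read thread count and resident memory (kB) for a process from /proc."""
    pid = pid if pid is not None else os.getpid()
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    threads = int(fields["Threads"])
    rss_kb = int(fields["VmRSS"].split()[0])  # value looks like "12345 kB"
    return threads, rss_kb
```

Sampling this periodically (e.g. from a `GLib.timeout_add` callback in the script above) makes the roughly-five-threads-per-NVENC-session growth easy to chart.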

Hi,
We tried the second application but could not exit through Ctrl+C. We will check it with the first application.

The current implementation does not handle runtime resolution change, and we would need some time to check it. Since the plugins are open source, you may also download the source code and check:

Jetson Linux Release 36.4.4 | NVIDIA Developer
Driver Package (BSP) Sources

See if you can check the source code and add support for caps renegotiation.

Hi,
Please download the source code package and apply the patches:

diff --git a/gst-v4l2/gstv4l2videoenc.c b/gst-v4l2/gstv4l2videoenc.c
index bbf2c23..e399ec2 100644
--- a/gst-v4l2/gstv4l2videoenc.c
+++ b/gst-v4l2/gstv4l2videoenc.c
@@ -912,8 +912,10 @@ gboolean is_drc (GstVideoEncoder *encoder, GstCaps *input_caps)
   gst_structure_get_int(input_caps_st, "height", &new_height);
 
   GST_INFO_OBJECT(encoder, "curr resolution: [%dx%d], new resolution: [%dx%d]", curr_width, curr_height, new_width, new_height);
-  if ((curr_width != new_width) || (curr_height != new_height))
+  if ((curr_width != new_width) || (curr_height != new_height)) {
+    gst_caps_unref(sink_caps);
     return TRUE;
+  }
 
   gst_caps_unref(sink_caps);
   return FALSE;
@@ -924,13 +926,20 @@ void set_encoder_src_caps (GstVideoEncoder *encoder, GstCaps *input_caps)
   GstStructure *src_caps_st, *input_caps_st;
   const GValue *framerate = NULL;
   GstCaps *src_caps = gst_caps_make_writable(gst_pad_get_current_caps(encoder->srcpad));
+  if (!src_caps) {
+    GST_WARNING_OBJECT(encoder, "No current caps available on srcpad");
+    return;
+  }
+
   src_caps_st = gst_caps_get_structure(src_caps, 0);
   input_caps_st = gst_caps_get_structure(input_caps, 0);
   framerate = gst_structure_get_value(input_caps_st, "framerate");
   if (framerate)
     gst_structure_set_value(src_caps_st, "framerate", framerate);
 
-  GST_DEBUG_OBJECT(encoder, "enc_src_caps: %s", gst_caps_to_string(src_caps));
+  gchar *caps_str = gst_caps_to_string(src_caps);
+  GST_DEBUG_OBJECT(encoder, "enc_src_caps: %s", caps_str);
+  g_free(caps_str);
   gst_pad_set_caps(encoder->srcpad, src_caps);
   gst_caps_unref(src_caps);
 }
@@ -946,10 +955,16 @@ reconfigure_fps (GstVideoEncoder *encoder, GstCaps *input_caps, guint label)
   gint curr_fps_n = 0, curr_fps_d = 0;
   gint new_fps_n = 0, new_fps_d = 0;
   gint ret = 0;
+  struct v4l2_streamparm parms;
 
   /*Check if current fps is same as in newly received caps */
   GstStructure *sink_pad_st, *input_caps_st;
   GstCaps *sink_caps = gst_pad_get_current_caps(encoder->sinkpad);
+  if (!sink_caps) {
+    GST_WARNING_OBJECT(encoder, "No current caps available on sinkpad");
+    return FALSE;
+  }
+
   sink_pad_st = gst_caps_get_structure(sink_caps, 0);
   input_caps_st = gst_caps_get_structure(input_caps, 0);
   gst_structure_get_fraction (sink_pad_st, "framerate", &curr_fps_n, &curr_fps_d);
@@ -960,8 +975,26 @@ reconfigure_fps (GstVideoEncoder *encoder, GstCaps *input_caps, guint label)
     enc_config.fps_d = new_fps_d;
   } else {
     GST_DEBUG_OBJECT(encoder, "No change in framerate");
+    gst_caps_unref(sink_caps);
+    return TRUE;
+  }
+
+  if (is_cuvid == FALSE) {
+    memset(&parms, 0, sizeof(parms));
+    parms.parm.output.timeperframe.numerator = new_fps_d;
+    parms.parm.output.timeperframe.denominator = new_fps_n;
+    parms.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+
+    ret = v4l2object->ioctl (v4l2object->video_fd, VIDIOC_S_PARM, &parms);
+    if (ret < 0) {
+      GST_WARNING_OBJECT (encoder, "Error in reconfiguring fps\n");
+      gst_caps_unref(sink_caps);
+      return FALSE;
+    }
+    gst_caps_unref(sink_caps);
     return TRUE;
   }
+
   memset (&control, 0, sizeof (control));
   memset (&ctrls, 0, sizeof (ctrls));
 
@@ -975,9 +1008,11 @@ reconfigure_fps (GstVideoEncoder *encoder, GstCaps *input_caps, guint label)
   ret = v4l2object->ioctl (v4l2object->video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
   if (ret < 0) {
     GST_WARNING_OBJECT (encoder, "Error in reconfiguring fps\n");
+    gst_caps_unref(sink_caps);
     return FALSE;
   }
 
+  gst_caps_unref(sink_caps);
   return TRUE;
 }
 #endif
@@ -994,6 +1029,7 @@ gst_v4l2_video_enc_set_format (GstVideoEncoder * encoder,
 #ifdef USE_V4L2_TARGET_NV
   const gchar *mimetype;
   GstStructure *s;
+  gboolean drc = FALSE;
 #endif
 
   GST_DEBUG_OBJECT (self, "Setting format: %" GST_PTR_FORMAT, state->caps);
@@ -1006,17 +1042,15 @@ gst_v4l2_video_enc_set_format (GstVideoEncoder * encoder,
       }
     }
 #ifdef USE_V4L2_TARGET_NV
-    if (is_cuvid == TRUE) {
-      if (is_drc (encoder, state->caps)) {
-        /*TODO: Reset encoder to allocate new buffer size at encoder output plane*/
-      } else {
-        GST_DEBUG_OBJECT (self, "Not DRC. Reconfigure encoder with new fps if required");
-        if (!reconfigure_fps(encoder, state->caps, V4L2_CID_MPEG_VIDEOENC_RECONFIG_FPS))
-          GST_WARNING_OBJECT(self, "S_EXT_CTRLS for RECONFIG_FPS failed\n");
-        /* set encoder src caps */
-        set_encoder_src_caps(encoder, state->caps);
-        return TRUE;
-      }
+    if (is_drc (encoder, state->caps)) {
+      /*TODO: Reset encoder to allocate new buffer size at encoder output plane*/
+      drc = TRUE;
+    } else {
+      if (!reconfigure_fps(encoder, state->caps, V4L2_CID_MPEG_VIDEOENC_RECONFIG_FPS))
+        GST_WARNING_OBJECT(self, "S_EXT_CTRLS for RECONFIG_FPS failed\n");
+      /* set encoder src caps */
+      set_encoder_src_caps(encoder, state->caps);
+      return TRUE;
     }
 #endif
 
@@ -1028,6 +1062,12 @@ gst_v4l2_video_enc_set_format (GstVideoEncoder * encoder,
 
     gst_video_codec_state_unref (self->input_state);
     self->input_state = NULL;
+
+    if (drc == TRUE) {
+      g_print("Drc detected, reconfiguring encoder\n");
+      gst_v4l2_video_enc_close(encoder);
+      gst_v4l2_video_enc_open(encoder);
+    }
   }
 
   outcaps = gst_pad_get_pad_template_caps (encoder->srcpad);
-- 
2.25.1


diff --git a/gst-nvvidconv-1.0/gstnvvconv.c b/gst-nvvidconv-1.0/gstnvvconv.c
index 2b2d06e..3af32e6 100644
--- a/gst-nvvidconv-1.0/gstnvvconv.c
+++ b/gst-nvvidconv-1.0/gstnvvconv.c
@@ -1600,6 +1600,8 @@ gst_nvvconv_set_caps (GstBaseTransform * btrans, GstCaps * incaps,
   gint min, surf_count = 0;
   GstCapsFeatures *ift = NULL;
   GstCapsFeatures *oft = NULL;
+  int existing_from_width = 0;
+  int existing_from_height = 0;
 
   space = GST_NVVCONV (btrans);
 
@@ -1614,8 +1616,14 @@ gst_nvvconv_set_caps (GstBaseTransform * btrans, GstCaps * incaps,
   space->in_info = in_info;
   space->out_info = out_info;
 
+  existing_from_width = space->from_width;
+  existing_from_height = space->from_height;
   space->from_width = GST_VIDEO_INFO_WIDTH (&in_info);
   space->from_height = GST_VIDEO_INFO_HEIGHT (&in_info);
+  if (existing_from_width != space->from_width || existing_from_height != space->from_height) {
+    space->resolution_changed = TRUE;
+    GST_DEBUG_OBJECT (space, "Resolution changed: %d\n", space->resolution_changed);
+  }
 
   space->to_width = GST_VIDEO_INFO_WIDTH (&out_info);
   space->to_height = GST_VIDEO_INFO_HEIGHT (&out_info);
@@ -3271,7 +3279,11 @@ gst_nvvconv_transform (GstBaseTransform * btrans, GstBuffer * inbuf,
         }
 
         if (space->need_intersurf || space->do_scaling || space->flip_method) {
-          if (space->isurf_flag == TRUE && space->ibuf_count < 1) {
+          if ((space->isurf_flag == TRUE && space->ibuf_count < 1) || space->resolution_changed) {
+            if (space->output_interbuf.isurface) {
+              NvBufSurfaceDestroy(space->output_interbuf.isurface);
+              space->output_interbuf.isurface = NULL;
+            }
             input_params.params.width = space->to_width;
             input_params.params.height = space->to_height;
             input_params.params.layout = NVBUF_LAYOUT_PITCH;
@@ -3328,8 +3340,12 @@ gst_nvvconv_transform (GstBaseTransform * btrans, GstBuffer * inbuf,
           space->isurf_flag = TRUE;
         }
         if (space->need_intersurf || space->do_scaling || space->flip_method) {
-          if (space->isurf_flag == TRUE && space->ibuf_count < 1) {
-          /* TODO : Check for PayloadInfo.TimeStamp = gst_util_uint64_scale (GST_BUFFER_PTS (inbuf), GST_MSECOND * 10, GST_SECOND); */
+          if ((space->isurf_flag == TRUE && space->ibuf_count < 1) || space->resolution_changed) {
+            if (space->input_interbuf.isurface) {
+              NvBufSurfaceDestroy(space->input_interbuf.isurface);
+              space->input_interbuf.isurface = NULL;
+            }
+            /* TODO : Check for PayloadInfo.TimeStamp = gst_util_uint64_scale (GST_BUFFER_PTS (inbuf), GST_MSECOND * 10, GST_SECOND); */
             input_params.params.width = space->from_width;
             input_params.params.height = space->from_height;
             input_params.params.layout = NVBUF_LAYOUT_PITCH;
@@ -3458,7 +3474,11 @@ gst_nvvconv_transform (GstBaseTransform * btrans, GstBuffer * inbuf,
 
             NvBufSurfaceAllocateParams input_params = { 0 };
 
-            if (space->ibuf_count < 1) {
+            if (space->ibuf_count < 1 || space->resolution_changed) {
+              if (space->input_interbuf.isurface) {
+                NvBufSurfaceDestroy(space->input_interbuf.isurface);
+                space->input_interbuf.isurface = NULL;
+              }
               input_params.params.width = space->from_width;
               input_params.params.height = space->from_height;
               input_params.params.layout = NVBUF_LAYOUT_PITCH;
@@ -3483,7 +3503,11 @@ gst_nvvconv_transform (GstBaseTransform * btrans, GstBuffer * inbuf,
               space->ibuf_count += 1;
             }
 
-            if (space->ibuf_count < 2) {
+            if (space->ibuf_count < 2 || space->resolution_changed) {
+              if (space->output_interbuf.isurface) {
+                NvBufSurfaceDestroy(space->output_interbuf.isurface);
+                space->output_interbuf.isurface = NULL;
+              }
               input_params.params.width = space->to_width;
               input_params.params.height = space->to_height;
               input_params.params.layout = NVBUF_LAYOUT_PITCH;
@@ -3541,6 +3565,7 @@ gst_nvvconv_transform (GstBaseTransform * btrans, GstBuffer * inbuf,
 done:
   gst_buffer_unmap (inbuf, &inmap);
   gst_buffer_unmap (outbuf, &outmap);
+  space->resolution_changed = FALSE;
 
   return flow_ret;
 
diff --git a/gst-nvvidconv-1.0/gstnvvconv.h b/gst-nvvidconv-1.0/gstnvvconv.h
index f40a736..e575f75 100644
--- a/gst-nvvidconv-1.0/gstnvvconv.h
+++ b/gst-nvvidconv-1.0/gstnvvconv.h
@@ -1,19 +1,20 @@
 /*
- * Copyright (c) 2014-2022, NVIDIA CORPORATION. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2014-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-License-Identifier: BSD-3-Clause
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions are met:
  *
  * 1. Redistributions of source code must retain the above copyright notice, this
- *    list of conditions and the following disclaimer.
+ * list of conditions and the following disclaimer.
  *
  * 2. Redistributions in binary form must reproduce the above copyright notice,
- *    this list of conditions and the following disclaimer in the documentation
- *    and/or other materials provided with the distribution.
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
  *
  * 3. Neither the name of the copyright holder nor the names of its
- *    contributors may be used to endorse or promote products derived from
- *    this software without specific prior written permission.
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
  *
  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
  * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
@@ -210,6 +211,7 @@ struct _Gstnvvconv
   gboolean negotiated;
   gboolean nvfilterpool;
   gboolean enable_blocklinear_output;
+  gboolean resolution_changed;
 
   GstBufferPool *pool;
   GMutex flow_lock;
-- 
2.25.1


and rebuild/replace the libs:

/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so
/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvidconv.so

Hi, this patch seems to work! Thanks.

One suggestion from the GStreamer developers is to check whether the caps are NULL before making them writable, which prevents a g_critical(). Here’s the updated patch for gstv4l2videoenc.c.

diff --git a/gstv4l2videoenc.c b/gstv4l2videoenc.c
index bbf2c23..b2c5ebd 100644
--- a/gstv4l2videoenc.c
+++ b/gstv4l2videoenc.c
@@ -912,8 +912,10 @@ gboolean is_drc (GstVideoEncoder *encoder, GstCaps *input_caps)
   gst_structure_get_int(input_caps_st, "height", &new_height);
 
   GST_INFO_OBJECT(encoder, "curr resolution: [%dx%d], new resolution: [%dx%d]", curr_width, curr_height, new_width, new_height);
-  if ((curr_width != new_width) || (curr_height != new_height))
+  if ((curr_width != new_width) || (curr_height != new_height)) {
+    gst_caps_unref(sink_caps);
     return TRUE;
+  }
 
   gst_caps_unref(sink_caps);
   return FALSE;
@@ -923,14 +925,21 @@ void set_encoder_src_caps (GstVideoEncoder *encoder, GstCaps *input_caps)
 {
   GstStructure *src_caps_st, *input_caps_st;
   const GValue *framerate = NULL;
-  GstCaps *src_caps = gst_caps_make_writable(gst_pad_get_current_caps(encoder->srcpad));
+  GstCaps *src_caps = gst_pad_get_current_caps(encoder->srcpad);
+  if (!src_caps) {
+    GST_WARNING_OBJECT(encoder, "No current caps available on srcpad");
+    return;
+  }
+  src_caps = gst_caps_make_writable(src_caps);
   src_caps_st = gst_caps_get_structure(src_caps, 0);
   input_caps_st = gst_caps_get_structure(input_caps, 0);
   framerate = gst_structure_get_value(input_caps_st, "framerate");
   if (framerate)
     gst_structure_set_value(src_caps_st, "framerate", framerate);
 
-  GST_DEBUG_OBJECT(encoder, "enc_src_caps: %s", gst_caps_to_string(src_caps));
+  gchar *caps_str = gst_caps_to_string(src_caps);
+  GST_DEBUG_OBJECT(encoder, "enc_src_caps: %s", caps_str);
+  g_free(caps_str);
   gst_pad_set_caps(encoder->srcpad, src_caps);
   gst_caps_unref(src_caps);
 }
@@ -946,10 +955,16 @@ reconfigure_fps (GstVideoEncoder *encoder, GstCaps *input_caps, guint label)
   gint curr_fps_n = 0, curr_fps_d = 0;
   gint new_fps_n = 0, new_fps_d = 0;
   gint ret = 0;
+  struct v4l2_streamparm parms;
 
   /*Check if current fps is same as in newly received caps */
   GstStructure *sink_pad_st, *input_caps_st;
   GstCaps *sink_caps = gst_pad_get_current_caps(encoder->sinkpad);
+  if (!sink_caps) {
+    GST_WARNING_OBJECT(encoder, "No current caps available on sinkpad");
+    return FALSE;
+  }
+
   sink_pad_st = gst_caps_get_structure(sink_caps, 0);
   input_caps_st = gst_caps_get_structure(input_caps, 0);
   gst_structure_get_fraction (sink_pad_st, "framerate", &curr_fps_n, &curr_fps_d);
@@ -960,8 +975,26 @@ reconfigure_fps (GstVideoEncoder *encoder, GstCaps *input_caps, guint label)
     enc_config.fps_d = new_fps_d;
   } else {
     GST_DEBUG_OBJECT(encoder, "No change in framerate");
+    gst_caps_unref(sink_caps);
     return TRUE;
   }
+
+  if (is_cuvid == FALSE) {
+    memset(&parms, 0, sizeof(parms));
+    parms.parm.output.timeperframe.numerator = new_fps_d;
+    parms.parm.output.timeperframe.denominator = new_fps_n;
+    parms.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+
+    ret = v4l2object->ioctl (v4l2object->video_fd, VIDIOC_S_PARM, &parms);
+    if (ret < 0) {
+      GST_WARNING_OBJECT (encoder, "Error in reconfiguring fps\n");
+      gst_caps_unref(sink_caps);
+      return FALSE;
+    }
+    gst_caps_unref(sink_caps);
+    return TRUE;
+  }
+
   memset (&control, 0, sizeof (control));
   memset (&ctrls, 0, sizeof (ctrls));
 
@@ -975,9 +1008,11 @@ reconfigure_fps (GstVideoEncoder *encoder, GstCaps *input_caps, guint label)
   ret = v4l2object->ioctl (v4l2object->video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
   if (ret < 0) {
     GST_WARNING_OBJECT (encoder, "Error in reconfiguring fps\n");
+    gst_caps_unref(sink_caps);
     return FALSE;
   }
 
+  gst_caps_unref(sink_caps);
   return TRUE;
 }
 #endif
@@ -994,6 +1029,7 @@ gst_v4l2_video_enc_set_format (GstVideoEncoder * encoder,
 #ifdef USE_V4L2_TARGET_NV
   const gchar *mimetype;
   GstStructure *s;
+  gboolean drc = FALSE;
 #endif
 
   GST_DEBUG_OBJECT (self, "Setting format: %" GST_PTR_FORMAT, state->caps);
@@ -1006,17 +1042,15 @@ gst_v4l2_video_enc_set_format (GstVideoEncoder * encoder,
       }
     }
 #ifdef USE_V4L2_TARGET_NV
-    if (is_cuvid == TRUE) {
-      if (is_drc (encoder, state->caps)) {
-        /*TODO: Reset encoder to allocate new buffer size at encoder output plane*/
-      } else {
-        GST_DEBUG_OBJECT (self, "Not DRC. Reconfigure encoder with new fps if required");
-        if (!reconfigure_fps(encoder, state->caps, V4L2_CID_MPEG_VIDEOENC_RECONFIG_FPS))
-          GST_WARNING_OBJECT(self, "S_EXT_CTRLS for RECONFIG_FPS failed\n");
-        /* set encoder src caps */
-        set_encoder_src_caps(encoder, state->caps);
-        return TRUE;
-      }
+    if (is_drc (encoder, state->caps)) {
+      /*TODO: Reset encoder to allocate new buffer size at encoder output plane*/
+      drc = TRUE;
+    } else {
+      if (!reconfigure_fps(encoder, state->caps, V4L2_CID_MPEG_VIDEOENC_RECONFIG_FPS))
+        GST_WARNING_OBJECT(self, "S_EXT_CTRLS for RECONFIG_FPS failed\n");
+      /* set encoder src caps */
+      set_encoder_src_caps(encoder, state->caps);
+      return TRUE;
     }
 #endif
 
@@ -1028,6 +1062,12 @@ gst_v4l2_video_enc_set_format (GstVideoEncoder * encoder,
 
     gst_video_codec_state_unref (self->input_state);
     self->input_state = NULL;
+
+    if (drc == TRUE) {
+      g_print("Drc detected, reconfiguring encoder\n");
+      gst_v4l2_video_enc_close(encoder);
+      gst_v4l2_video_enc_open(encoder);
+    }
   }
 
   outcaps = gst_pad_get_pad_template_caps (encoder->srcpad);
-- 
2.43.0

Hi,
Thanks for sharing the patch. Will check it with our teams.

