Videorate plugin in the pipeline does not generate frames correctly

• Hardware Platform (Jetson / GPU): NVIDIA Jetson AGX Orin
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5.2
• Issue Type (questions, new requirements, bugs): questions

I would like to recreate this GStreamer pipeline in Python using the DeepStream SDK:

gst-launch-1.0 filesrc location='my_video.h264' ! "video/x-h264, width=1920, height=1080, framerate=60/1" ! h264parse ! nvv4l2decoder ! nvstreammux width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! nvvidconv ! tee name=t ! nvv4l2h265enc iframeinterval=60 ! h265parse ! splitmuxsink location=h265_data/video_R%02d.h265 max-size-time=1000000000 t. ! videorate ! videoconvert ! 'video/x-raw(memory:NVMM), format=(string)I420, framerate=1/1' ! nvv4l2h264enc iframeinterval=1 ! h264parse ! splitmuxsink location=h264_data/video_h264_P%02d.h264 max-size-time=1000000000

I recreated it in this way:

# Initialize GStreamer
Gst.init(None)

# Create GStreamer pipeline
pipeline = Gst.Pipeline()
if not pipeline:
    sys.stderr.write("Unable to create Pipeline\n")

# Source element for reading from file
input_source = create_pipeline_element('filesrc', 'source', 'Source')
input_source.set_property('location', args[1])

# Create caps filter to set the width, height and framerate of the input
input_caps = create_pipeline_element("capsfilter", "input-caps", "Input Caps")
input_caps.set_property("caps", Gst.Caps.from_string("video/x-h264, width=1920, height=1080, framerate=60/1"))

# Create h264 parser
input_parser = create_pipeline_element("h264parse", "h264-parser", "H264 Parser")

# Create nvv4l2decoder
input_decoder = create_pipeline_element("nvv4l2decoder", "nvv4l2-decoder", "Nvv4l2 Decoder")

# Create streammux
streammux = create_pipeline_element("nvstreammux", "streammux", "Streammux")
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 1)
streammux.set_property("batched-push-timeout", 4000000)

# Create tee
tee = create_pipeline_element("tee", "tee", "Main Tee")

# ------------------------------------------------
# Create branch for h265 elements
h265_elements = []

# Create h265 encoder
h265_encoder = create_pipeline_element('nvv4l2h265enc', 'h265_encoder', 'h265 Encoder')
h265_encoder.set_property('iframeinterval', 60)
h265_elements.append(h265_encoder)

# Create h265 parser
h265_parser = create_pipeline_element('h265parse', 'h265_parser', 'h265 Parser')
h265_elements.append(h265_parser)

# Create h265 split muxer sink
h265_muxer = create_pipeline_element('splitmuxsink', 'h265_muxer', 'h265 Muxer')
h265_muxer.set_property('location', 'h265_data/video_R%02d.h265')
h265_muxer.set_property('max-size-time', 1000000000)
h265_elements.append(h265_muxer)

# ------------------------------------------------
# Create branch for h264 elements
h264_elements = []

# Create h264 videorate
h264_videorate = create_pipeline_element('videorate', 'h264_videorate', 'h264 Videorate')
h264_videorate.set_property('rate', 1)
h264_videorate.set_property('max-rate', 1)
h264_elements.append(h264_videorate)

# Create h264 nvvideoconvert
h264_nvvideoconvert = create_pipeline_element('nvvideoconvert', 'h264_nvvideoconvert', 'h264 Videoconvert')
h264_elements.append(h264_nvvideoconvert)

# Create h264 encoder
h264_encoder = create_pipeline_element('nvv4l2h264enc', 'h264_encoder', 'h264 Encoder')
h264_encoder.set_property('iframeinterval', 1)
h264_elements.append(h264_encoder)

# Create h264 parser
h264_parser = create_pipeline_element('h264parse', 'h264_parser', 'h264 Parser')
h264_elements.append(h264_parser)

# Create h264 split muxer sink
h264_muxer = create_pipeline_element('splitmuxsink', 'h264_muxer', 'h264 Muxer')
h264_muxer.set_property('location', 'h264_data/video_h264_P%02d.h264')
h264_muxer.set_property('max-size-time', 1000000000)
h264_elements.append(h264_muxer)

# ------------------------------------------------
# Add elements to pipeline
pipeline.add(input_source)
pipeline.add(input_caps)
pipeline.add(input_parser)
pipeline.add(input_decoder)
pipeline.add(streammux)
pipeline.add(tee)

# Add h265 elements to pipeline
for element in h265_elements:
    pipeline.add(element)

# Add h264 elements to pipeline
for element in h264_elements:
    pipeline.add(element)

# ------------------------------------------------
# Link elements in pipeline
input_source.link(input_caps)
input_caps.link(input_parser)
input_parser.link(input_decoder)

input_decoder_srcpad = input_decoder.get_static_pad("src")
if not input_decoder_srcpad:
    sys.stderr.write("Unable to get src pad of decoder\n")

streammux_sinkpad = streammux.get_request_pad("sink_0")
if not streammux_sinkpad:
    sys.stderr.write("Unable to get sink pad of streammux\n")

input_decoder_srcpad.link(streammux_sinkpad)
streammux.link(tee)

# Link h265 elements
tee.link(h265_encoder)
h265_encoder.link(h265_parser)
h265_parser.link(h265_muxer)

# Link h264 elements
tee.link(h264_videorate)
h264_videorate.link(h264_nvvideoconvert)
h264_nvvideoconvert.link(h264_encoder)
h264_encoder.link(h264_parser)
h264_parser.link(h264_muxer)

# ----------------------------
# Create an event loop and feed gstreamer bus messages to it
print(f"Playing file {args[1]}\n")
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

# Start pipeline
print("Starting pipeline\n")
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass

# cleanup
pipeline.set_state(Gst.State.NULL)

PROBLEM
The problem occurs in the h264 branch because h264_videorate does not generate frames correctly. After running, the code generates only one h265 and one h264 file and then stops generating more. When I comment out h264_videorate it works fine and files are generated every second, but the generated h264 files do not have a framerate of 1.

What is wrong in the h264 branch? The h265 branch works perfectly fine and generates the frames it should, but there seems to be an error somewhere in the h264 branch that I cannot locate.

There are also caps missing from the h264 branch compared to the original pipeline, namely ... ! 'video/x-raw(memory:NVMM), format=(string)I420, framerate=1/1' ! ..., but I wanted to set the framerate on videorate itself, which does not work. Is that correct, or should I create a capsfilter to set the format and framerate instead of setting them on videorate?
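For reference, a minimal sketch of the capsfilter approach, reusing the element and helper names from the code above (the exact caps string is an assumption taken from the original gst-launch pipeline):

# Sketch: let videorate satisfy a downstream caps constraint instead of
# setting its 'rate'/'max-rate' properties. videorate drops or duplicates
# frames to match whatever framerate the downstream caps demand.
h264_rate_caps = create_pipeline_element('capsfilter', 'h264_rate_caps', 'h264 Rate Caps')
h264_rate_caps.set_property(
    'caps', Gst.Caps.from_string('video/x-raw(memory:NVMM), format=(string)I420, framerate=1/1'))

# After pipeline.add(h264_rate_caps), link it between videorate and the converter:
# tee -> h264_videorate -> h264_rate_caps -> h264_nvvideoconvert -> ...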


Please refer to this pipeline, which can convert the fps from 30 to 1:

gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder !  videorate ! capsfilter caps="video/x-raw(memory:NVMM),framerate=1/1, format=NV12" ! nv3dsink

It still does not work in my case.
What I have right now after the tee is this:

... ! videorate ! capsfilter caps="video/x-raw(memory:NVMM), framerate=1/1, format=NV12" ! nvv4l2h264enc iframeinterval=1 ! h264parse ! splitmuxsink location='h264_data/video_h264_P%02d.h264' max-size-time=1000000000

When I incorporated your pipeline into mine, so that after the tee it looks like this:

... ! videorate ! capsfilter caps="video/x-raw(memory:NVMM), framerate=1/1, format=NV12" ! nv3dsink

it works, but I want to save this output to an h264 file, not display it on screen, and that does not work. Pipeline 1 saves the first h264 and h265 file and then stops saving them, but the code keeps running.

  1. To narrow down this issue, please make sure each separate branch can work well on its own (one way to check this is sketched after the pipeline below).
  2. Please refer to this pipeline, which supports videorate and nvv4l2h264enc, then port it to your application.
gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder !  videorate ! capsfilter caps="video/x-raw(memory:NVMM),framerate=1/1, format=NV12" ! nvv4l2h264enc bitrate=1000000 ! filesink location=test.264
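One way to confirm in the Python application that buffers actually keep flowing into each branch (a debugging sketch, not part of the reply above; it assumes the tee pads are already linked when it runs):

# Debugging sketch: count buffers leaving the tee on each branch.
def make_buffer_counter(name):
    count = {'n': 0}
    def on_buffer(pad, info):
        count['n'] += 1
        print(f"{name}: {count['n']} buffers")
        return Gst.PadProbeReturn.OK
    return on_buffer

for pad in tee.srcpads:
    pad.add_probe(Gst.PadProbeType.BUFFER, make_buffer_counter(pad.get_name()))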

OK, so what I observed is:

  1. After the tee I have just 2 branches:
    • h265 branch: alone, without the h264 branch, it works perfectly fine
    • h264 branch: alone, without the h265 branch, it works and saves frames to different files, but the last file seems to contain 2 frames with framerate=1 instead of them being saved to 2 different files

When I turn on both branches, the code does not work. It generates two files (one .h264 and one .h265) and then stops generating them at all, although the code keeps running in the terminal. It is strange that both branches work fine separately, but when I connect them to the tee in the same way, the whole pipeline stops working.

It seems that the branches block each other when run together.
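For what it's worth, the usual GStreamer pattern for a tee is to put a queue at the head of every branch, so that each branch gets its own streaming thread and a slow branch cannot stall the other. A minimal sketch against the Python code above (the queue element names are made up here):

# Sketch: decouple the tee branches with queues so neither can block the other.
h265_queue = Gst.ElementFactory.make('queue', 'h265_queue')
h264_queue = Gst.ElementFactory.make('queue', 'h264_queue')
pipeline.add(h265_queue)
pipeline.add(h264_queue)

tee.link(h265_queue)
h265_queue.link(h265_encoder)      # instead of tee.link(h265_encoder)

tee.link(h264_queue)
h264_queue.link(h264_videorate)    # instead of tee.link(h264_videorate)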

Maybe I could share my files to recreate the issue?

  1. Could you use "gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder" as the source? Wondering if the issue is related to the source.
  2. If it still doesn't work, could you share the current whole pipeline?

It still does not work with the NVIDIA sample. Here is my GStreamer pipeline with the NVIDIA sample (the difference is that the source is an h264 file):

gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! tee name=t t. ! nvv4l2h265enc iframeinterval=60 ! h265parse ! splitmuxsink location=/tmp/data/video_R%02d.h265 max-size-time=1000000000 t. ! videorate ! capsfilter caps="video/x-raw(memory:NVMM), framerate=1/1, format=NV12" ! nvvideoconvert ! nvv4l2h264enc iframeinterval=1 ! h264parse ! splitmuxsink location=/tmp/data/video_R%02d.h264 max-size-time=1000000000

When you try to run the tee branches separately they work, but together they seem to block one another. When videorate is removed the code works but generates the h264 files wrongly: the h264 files should have framerate=1 and look like snapshots/photos of each second of the video. Also, each h264 frame should be saved in a new file instead of all being saved in one.

Maybe the pipeline itself is incorrect?

Here is also the Python code that recreates this pipeline 1:1:

import sys

import gi

gi.require_version("Gst", "1.0")
from gi.repository import GLib, Gst


def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t == Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    return True


def create_element_or_print_err(factory_name, element_name, print_name):
    print(f"Creating {print_name}\n")

    elm = Gst.ElementFactory.make(factory_name, element_name)
    if not elm:
        sys.stderr.write(f"Unable to create {print_name}\n")

    return elm


def main(args):
    if len(args) != 2:
        sys.stderr.write('Wrong number of arguments. Provide a file source.\n')
        sys.exit(1)

    # Initialize GStreamer
    Gst.init(None)

    # Create GStreamer pipeline
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write("Unable to create Pipeline\n")

    # Create elements
    # Source element for reading from h264 file
    input_source = create_element_or_print_err('filesrc', 'source', 'Source')
    input_source.set_property('location', args[1])

    # Create h264 parser
    input_parser = create_element_or_print_err("h264parse", "h264-parser", "H264 Parser")

    # Create nvv4l2decoder
    input_decoder = create_element_or_print_err("nvv4l2decoder", "nvv4l2-decoder", "Nvv4l2 Decoder")

    # Create streammux
    streammux = create_element_or_print_err("nvstreammux", "streammux", "Streammux")
    streammux.set_property("width", 1920)
    streammux.set_property("height", 1080)
    streammux.set_property("batch-size", 1)
    streammux.set_property("batched-push-timeout", 4000000)

    # Create tee for splitting into the two encoder branches
    tee = create_element_or_print_err("tee", "tee", "Main Tee")

    # ----------------------------------------
    # Create branch for h265 elements with iframeinterval=60
    # Create h265 encoder
    h265_encoder = create_element_or_print_err('nvv4l2h265enc', 'h265_encoder', 'h265 Encoder')
    h265_encoder.set_property('iframeinterval', 60)

    # Create h265 parser
    h265_parser = create_element_or_print_err('h265parse', 'h265_parser', 'h265 Parser')

    # Create h265 split muxer sink
    h265_sink = create_element_or_print_err('splitmuxsink', 'h265_muxer', 'h265 Muxer')
    h265_sink.set_property('location', '/tmp/data/video_R%02d.h265')
    h265_sink.set_property('max-size-time', 1000000000)

    # ----------------------------------------
    # Create branch to h264 elements with iframeinterval=1 and framerate=1/1
    # Create h264 videorate
    h264_videorate = create_element_or_print_err('videorate', 'h264_videorate', 'h264 Videorate')

    # Create caps filter for videorate
    h264_caps = create_element_or_print_err('capsfilter', 'h264_caps', 'h264 Caps')
    h264_caps.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM), framerate=1/1, format=NV12"))

    # Create h264 videoconvert
    h264_nvvideoconvert = create_element_or_print_err('nvvideoconvert', 'h264_nvvideoconvert',
                                                      'h264 Videoconvert')

    # Create h264 encoder
    h264_encoder = create_element_or_print_err('nvv4l2h264enc', 'h264_encoder', 'h264 Encoder')
    h264_encoder.set_property('iframeinterval', 1)

    # Create h264 parser
    h264_parser = create_element_or_print_err('h264parse', 'h264_parser', 'H264 Parser')

    # Create h264 split muxer sink
    h264_sink = create_element_or_print_err('splitmuxsink', 'h264_muxer', 'h264 Muxer')
    h264_sink.set_property('location', '/tmp/data/video_R%02d.h264')
    h264_sink.set_property("max-size-time", 1000000000)

    pipeline.add(input_source)
    pipeline.add(input_parser)
    pipeline.add(input_decoder)
    pipeline.add(streammux)
    pipeline.add(tee)
    pipeline.add(h265_encoder)
    pipeline.add(h265_parser)
    pipeline.add(h265_sink)
    pipeline.add(h264_videorate)
    pipeline.add(h264_caps)
    pipeline.add(h264_nvvideoconvert)
    pipeline.add(h264_encoder)
    pipeline.add(h264_parser)
    pipeline.add(h264_sink)

    # Link elements
    input_source.link(input_parser)
    input_parser.link(input_decoder)

    input_decoder_srcpad = input_decoder.get_static_pad("src")
    if not input_decoder_srcpad:
        sys.stderr.write("Unable to get src pad of decoder\n")

    streammux_sinkpad = streammux.get_request_pad("sink_0")
    if not streammux_sinkpad:
        sys.stderr.write("Unable to get sink pad of streammux\n")

    input_decoder_srcpad.link(streammux_sinkpad)
    streammux.link(tee)

    # Link h265 branch elements
    tee.link(h265_encoder)
    h265_encoder.link(h265_parser)
    h265_parser.link(h265_sink)

    # Link h264 branch elements
    tee.link(h264_videorate)
    h264_videorate.link(h264_caps)
    h264_caps.link(h264_nvvideoconvert)
    h264_nvvideoconvert.link(h264_encoder)
    h264_encoder.link(h264_parser)
    h264_parser.link(h264_sink)

    print(f"Playing file {args[1]}\n")
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Start pipeline
    print("Starting pipeline\n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)


if __name__ == '__main__':
    sys.exit(main(sys.argv))

After videorate the timestamps will be changed. Please replace max-size-time=1000000000 with max-size-bytes=10000000.
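In the Python script above, that change would look like this (value taken from the reply):

# Split on accumulated bytes instead of running time, since videorate
# rewrites the buffer timestamps on this branch.
h264_sink.set_property('max-size-bytes', 10000000)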

I changed max-size-time to max-size-bytes=10000000 in the splitmuxsink in the h264 branch, but there is still the same problem. Only 2 files are generated and both of them are empty; they are not even video files. The code in the terminal still runs but files are not generated.
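One general GStreamer point worth checking when splitmuxsink leaves zero-byte files (an assumption, not something confirmed in this thread): muxers only finalize their output on EOS, so the pipeline should be drained before it is set to NULL. A sketch for the shutdown path of the script above:

# Sketch: drain the pipeline so splitmuxsink can finalize any open file;
# tearing down without an EOS can leave the last file empty or unplayable.
pipeline.send_event(Gst.Event.new_eos())
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)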

I used the following command to test on a dGPU with DS 7.0. There will be 3 h264 files; all of them are smaller than 10000000 bytes and can be played well.
ll *.h264
-rw-r--r-- 1 root root 9681444 Jul 26 08:24 video_R00.h264
-rw-r--r-- 1 root root 9584248 Jul 26 08:24 video_R01.h264
-rw-r--r-- 1 root root 7000851 Jul 26 08:24 video_R02.h264

gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! tee name=t t. ! nvv4l2h265enc iframeinterval=60 ! h265parse ! splitmuxsink location=video_R%02d.h265 max-size-time=1000000000 t. ! videorate ! capsfilter caps="video/x-raw(memory:NVMM), framerate=1/1, format=NV12" ! nvvideoconvert ! nvv4l2h264enc iframeinterval=1 ! h264parse ! splitmuxsink location=video_R%02d.h264 max-size-bytes=10000000

I copied your command and, well, it does not work in my case. Only 2 files with 0 bytes are generated. Could it be because of the DeepStream version? Mine is 6.3 and yours is 7.0.

Is there any way this could work on version 6.3 of DS? What I basically want to achieve is to generate 2 separate files at the same time: an h265 file that consists of 1 second of video, and an h264 file that is a snapshot/photo with framerate=1.

Or could it be that the 2 encoders, working simultaneously, block each other?

Installing DeepStream version 7.0 resolved the issue, albeit necessitating the installation of JetPack 6.0 (previously JetPack 5.1.2), which effectively required reflashing the system.
