Save DeepStream output as video file

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) R32 Revision: 5.0 GCID: 25531747 Board: t186ref
• TensorRT Version 7.1.3 + CUDA 10.2

• Issue Type( questions, new requirements, bugs) Question about saving the inferred output file (from RTSP stream) every 1 hr

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) please see below

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) please see below

Hi Deepstream team,

Following this code and other answers on the forum, I was able to get the app to save a file.

Basically, this is what I did. I would create the Gst elements as shown below…

if output_mp4:
    queue_sink = Gst.ElementFactory.make("queue", "queue_sink")
    nvvidconv_sink = Gst.ElementFactory.make(
        "nvvideoconvert", "nvvidconv_sink")
    caps_filter = Gst.ElementFactory.make("capsfilter", "caps-filter")
    # Build the caps from a string; Gst.ElementFactory.make() creates
    # elements, not caps, so Gst.Caps.from_string() is needed here
    caps = Gst.Caps.from_string(
        "video/x-raw(memory:NVMM), format=I420")
    caps_filter.set_property('caps', caps)

    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "h264-encoder")
    encoder.set_property('bitrate', 2000000)

    h264parse = Gst.ElementFactory.make("h264parse", "h264-parse")
    muxer = Gst.ElementFactory.make("matroskamux", "muxer")
    sink = Gst.ElementFactory.make("filesink", "file-sink")
    # matroskamux writes a Matroska container, so use a .mkv extension
    sink.set_property('location', "/home/z5-lpr/Downloads/output.mkv")
else:
    if not monitor:
        print("Creating FakeSink \n")
        sink = Gst.ElementFactory.make("fakesink", "fakesink")
        if not sink:
            sys.stderr.write(" Unable to create fakesink \n")

        sink.set_property("qos", 0)

        if is_aarch64():
            print("Creating transform \n ")
            # no display on this path; a plain queue stands in for
            # nvegltransform so the linking code stays the same
            transform = Gst.ElementFactory.make(
                "queue", "nvegl-transform")
            if not transform:
                sys.stderr.write(" Unable to create transform \n")

    else:
        # Finally render the osd output
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
        if not sink:
            sys.stderr.write(" Unable to create egl sink \n")

        sink.set_property("qos", 0)

        if is_aarch64():
            print("Creating transform \n ")
            transform = Gst.ElementFactory.make(
                "nvegltransform", "nvegl-transform")
            if not transform:
                sys.stderr.write(" Unable to create transform \n")

Then, I would add them to the pipeline and link them up:

if output_mp4:
    nvosd.link(queue9)
    queue9.link(queue_sink)

    queue_sink.link(queue10)
    queue10.link(nvvidconv_sink)

    nvvidconv_sink.link(queue11)
    queue11.link(caps_filter)

    caps_filter.link(queue12)
    queue12.link(encoder)

    encoder.link(queue13)
    queue13.link(h264parse)

    h264parse.link(queue14)
    queue14.link(muxer)

    muxer.link(queue15)
    queue15.link(sink)
else:
    if is_aarch64():
        nvosd.link(queue9)
        queue9.link(transform)
        transform.link(sink)

After that, I just do the following to get the app going:

# create an event loop and feed gstreamer bus messages to it
loop = GObject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

# Let's add a probe to get informed of the metadata generated; we add the
# probe to the sink pad of the osd element, since by that time the buffer
# will have all the metadata.
osdsinkpad = nvosd.get_static_pad("sink")
if not osdsinkpad:
    sys.stderr.write(" Unable to get sink pad of nvosd \n")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER,
                     osd_sink_pad_buffer_probe, 0)

analytics_srcpad = nvdsanalytics.get_static_pad("src")
if not analytics_srcpad:
    sys.stderr.write(" Unable to get src pad of nvdsanalytics \n")
analytics_srcpad.add_probe(Gst.PadProbeType.BUFFER,
                           nvdsanalytics_src_pad_buffer_probe, 0)

# List the sources
print("Now playing...")
for i, source in enumerate(args):
    if i != 0:
        print(i, ": ", source)

print("Starting pipeline \n")
# start playback and listen for events
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass
# cleanup
print("Exiting app\n")
pipeline.set_state(Gst.State.NULL)

Since I want to save the output video every 1 hr, I was trying to figure out how I could possibly do that. As I am still learning GStreamer in general, I wasn't entirely sure how I could "break" the stream and save the file. Would the team have a pointer on where to insert the time-based saving? That way I would get files like output_032221_01_00.h264, output_032221_02_00.h264, etc.
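For the hourly split itself, the stock GStreamer answer is usually the splitmuxsink element, which replaces the h264parse → muxer → filesink tail of the pipeline and starts a new file every max-size-time nanoseconds; its format-location signal lets you supply your own filenames. Either way, generating the timestamped names is plain Python. A minimal sketch (the helper name hourly_filename is mine, not a DeepStream API):

```python
from datetime import datetime


def hourly_filename(ts=None, prefix="output", ext="h264"):
    """Build a rotation filename like output_032221_01_00.h264
    (MMDDYY_HH_MM) from a datetime; defaults to now."""
    ts = ts or datetime.now()
    return ts.strftime(f"{prefix}_%m%d%y_%H_%M.{ext}")


# Example: 1:00 AM on March 22, 2021
print(hourly_filename(datetime(2021, 3, 22, 1, 0)))
# → output_032221_01_00.h264
```

With splitmuxsink you would return this string from a format-location handler; with a manual EOS-based rotation you would set it as the new filesink location.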

I would appreciate any pointers.

To end the streaming, there are two ways:

  1. Stop the pipeline by setting its state to NULL.
  2. Stop streaming by sending EOS to the sink, then removing the sink from the pipeline.

These are basic GStreamer usages. Please refer to https://gstreamer.freedesktop.org/
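To make option 2 fire hourly, a common pattern (a sketch under my own naming, not something from the DeepStream samples) is a GLib timer, e.g. GLib.timeout_add_seconds(3600, cb), whose callback performs the EOS-then-restart sequence and points the filesink at a new location. The timing bookkeeping itself is plain Python; the GStreamer calls are indicated in comments:

```python
import time


class HourlyRotator:
    """Decides when an hourly file rotation is due. The GStreamer side
    of a rotation is sketched in the comments of rotate() below."""

    def __init__(self, period_s=3600):
        self.period_s = period_s
        self.started = time.monotonic()

    def due(self, now=None):
        """True once a full period has elapsed since the last rotation."""
        now = time.monotonic() if now is None else now
        return now - self.started >= self.period_s

    def rotate(self):
        """Mark a rotation as done. In the real pipeline, this is where
        you would:
          1. pipeline.send_event(Gst.Event.new_eos()) and wait for the
             EOS message on the bus, so the muxer can finalize the file;
          2. sink.set_state(Gst.State.NULL);
          3. sink.set_property('location', <next filename>);
          4. sink.set_state(Gst.State.PLAYING).
        """
        self.started = time.monotonic()
```

Note that filesink only accepts a new location while it is stopped, which is why the EOS-then-restart sequence matters: sending EOS first lets the muxer write a playable file before the sink is torn down.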

Hi @a428tm … If you were able to fit these hourly dumps into your pipeline, can you share a snippet of where you added the hourly check and dumped the file?