Save output to video using filesink

I am using the Docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-devel

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 11.1
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi, I am trying to save the DeepStream output to an MP4 video file using the Python API. There is no example of how to implement this in the reference applications and I am a bit lost. I am running on a headless server, so I can't use the nvdsosd component.

What is the exact problem? What is the pipeline you are using? Can you share your code?

The exact problem is drawing bounding boxes with nvdsosd on a headless server (no graphical interface) and saving the output to a video file.

Are you familiar with gstreamer? Are you familiar with gstreamer python?

https://gstreamer.freedesktop.org/bindings/python.html

Yes, I am

I am referring to this thread Python in DeepStream: error {Internal data stream error} while running deepstream-test1

I am seeing the same error as the thread author:

error {Internal data stream error} while running deepstream-test1

It works when I deactivate the OSD and unset DISPLAY in the Docker image, but that is the only way I can make it work.

Note that I added a filesink to save the output to a video file.

I think the sample code in error {Internal data stream error} while running deepstream-test1 has already implemented what you want.

So what is your problem with OSD enabled?

Can the following command run successfully on your platform?

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=dstest1_pgie_config.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h264enc ! h264parse ! mux.video_0 qtmux name=mux ! filesink location=test.mp4

Yes, that’s working properly

I have tried to run the sample code from Python in DeepStream: error {Internal data stream error} while running deepstream-test1 and adapt it a little to the sample apps and your gst-launch command, but it gets stuck.

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Intiallizing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK


def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    print("Creating muxer \n")
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    print("Creating nvinfer \n")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    # Use convertor to convert from NV12 to RGBA as required by nvosd
    print("Creating converter \n")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    print("Creating OSD\n")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating Queue \n")
    queue = Gst.ElementFactory.make("queue", "queue")
    if not queue:
        sys.stderr.write(" Unable to create queue \n")

    print("Creating converter 2\n")
    nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    if not nvvidconv2:
        sys.stderr.write(" Unable to create nvvidconv2 \n")

    print("Creating capsfilter \n")
    capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
    if not capsfilter:
        sys.stderr.write(" Unable to create capsfilter \n")

    caps = Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)

    print("Creating Encoder \n")
    encoder = Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    if not encoder:
        sys.stderr.write(" Unable to create encoder \n")

    encoder.set_property("bitrate", 2000000)

    print("Creating Code Parser \n")
    codeparser = Gst.ElementFactory.make("mpeg4videoparse", "mpeg4-parser")
    if not codeparser:
        sys.stderr.write(" Unable to create code parser \n")

    print("Creating Container \n")
    container = Gst.ElementFactory.make("qtmux", "qtmux")
    if not container:
        sys.stderr.write(" Unable to create qtmux container \n")

    print("Creating Sink \n")
    sink = Gst.ElementFactory.make("filesink", "filesink")
    if not sink:
        sys.stderr.write(" Unable to create file sink \n")

    sink.set_property("location", "./out.mp4")
    sink.set_property("sync", 1)
    sink.set_property("async", 0)

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(codeparser)
    pipeline.add(container)
    pipeline.add(sink)
    pipeline.add(queue)
    pipeline.add(nvvidconv2)
    
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    
    
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)


    nvosd.link(queue)
    queue.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(codeparser)
    codeparser.link(container)
    container.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Let's add a probe to get informed of the generated metadata. We add the
    # probe to the sink pad of the osd element, since by that time the buffer
    # will have all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

Do you want to encode MPEG4(AVC) stream in your MP4 file?

Yes, that would be OK. I am just trying to replicate the code from the DeepStream C app.

The pipeline is OK.

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=dstest1_pgie_config.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! 'video/x-raw,format=I420' ! avenc_mpeg4 bitrate=2000000 ! mpeg4videoparse ! mux.video_0 qtmux name=mux ! filesink location=test.mp4
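If it helps, the same pipeline string can be driven from Python via Gst.parse_launch instead of building it element by element. This is an untested sketch assuming the DeepStream 5.0 container paths and config file from this thread; naming the nvdsosd element (name=osd here is my addition) lets you attach the metadata probe afterwards with pipeline.get_by_name().

```python
#!/usr/bin/env python3
# Sketch (untested): run the gst-launch pipeline above from Python.
# Requires the DeepStream container; paths/config are from this thread.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE = (
    "filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! "
    "h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=dstest1_pgie_config.txt ! "
    "nvvideoconvert ! nvdsosd name=osd ! nvvideoconvert ! "
    "video/x-raw,format=I420 ! "
    "avenc_mpeg4 bitrate=2000000 ! mpeg4videoparse ! "
    "mux.video_0 qtmux name=mux ! filesink location=test.mp4"
)

pipeline = Gst.parse_launch(PIPELINE)
loop = GLib.MainLoop()

def on_message(bus, msg):
    # Quit on EOS or error so qtmux finalizes the MP4 before we exit.
    if msg.type in (Gst.MessageType.EOS, Gst.MessageType.ERROR):
        loop.quit()

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)

# The OSD probe from the sample code can still be attached, e.g.:
# pipeline.get_by_name("osd").get_static_pad("sink").add_probe(
#     Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

This avoids hand-linking mistakes entirely, since GStreamer parses the links from the string.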

I know, but it does not work when using Python callbacks for the OSD metadata as in the Python examples… see the code I attached above.

Your code is wrong. Please try deepstream_test_1.py.txt (11.3 KB)


It gets stuck… this is the last message I see:

0:00:11.561231398 10 0x256fd30 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully

This is what I see when enabling GST_DEBUG=3:

0:00:11.628196714    10      0x2cbad20 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
    0:00:11.629349903    10      0x2cbad20 WARN                 basesrc gstbasesrc.c:3583:gst_base_src_start_complete:<file-source> pad not activated yet
    0:00:11.671701152    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 0
    0:00:11.671788915    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 0
    0:00:11.671836286    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 66 will be dropped
    0:00:11.671875830    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 8 will be dropped
    0:00:11.671918164    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 0
    0:00:11.671968304    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 42 will be dropped
    0:00:11.672012088    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 4 will be dropped
    0:00:11.672055637    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 103 will be dropped
    0:00:11.672102906    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 24 will be dropped
    0:00:11.672148816    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 8 will be dropped
    0:00:11.672185948    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 16 will be dropped
    0:00:11.672220767    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 8 will be dropped
    0:00:11.672263194    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 26 will be dropped
    0:00:11.672313123    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 120 will be dropped
    0:00:11.672343169    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 0
    0:00:11.672383884    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 8 will be dropped
    0:00:11.672420039    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 0
    0:00:11.672440936    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 1 Slice, Size: 21 will be dropped
    0:00:11.672471265    10      0x23e1b70 WARN               h264parse gsth264parse.c:1237:gst_h264_parse_handle_frame:<h264-parser> broken/invalid nal Type: 0 Unknown, Size: 32 will be dropped
    0:00:11.672500842    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 1
    0:00:11.672529556    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 1
    0:00:11.672558628    10      0x23e1b70 WARN               h264parse gsth264parse.c:1197:gst_h264_parse_handle_frame:<h264-parser> input stream is corrupt; it contains a NAL unit of length 1

The MP4 input file is not corrupted.

I finally made it work… I just replaced the filesrc front end with uridecodebin and had no problems at all.
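For reference, a minimal, untested sketch of that change: uridecodebin replaces the filesrc ! h264parse ! nvv4l2decoder front end and autoplug-decodes the MP4, so the link to nvstreammux has to happen in a pad-added callback once the pad exists. Element names and the comment about the rest of the pipeline are illustrative, not the author's exact code.

```python
# Sketch (untested): swap the filesrc front end for uridecodebin.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline()
source = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
source.set_property("uri", sys.argv[1])  # e.g. file:///path/to/input.mp4

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 1)
streammux.set_property("batched-push-timeout", 4000000)

pipeline.add(source)
pipeline.add(streammux)

def on_pad_added(decodebin, pad):
    # uridecodebin creates its source pads dynamically, so the link to
    # nvstreammux must be made here, not at pipeline-construction time.
    caps = pad.get_current_caps() or pad.query_caps()
    if caps.get_structure(0).get_name().startswith("video"):
        sinkpad = streammux.get_request_pad("sink_0")
        pad.link(sinkpad)

source.connect("pad-added", on_pad_added)

# ...the rest of the pipeline (pgie ! nvvideoconvert ! nvdsosd ! nvvideoconvert
# ! capsfilter ! encoder ! parser ! qtmux ! filesink) is built and linked
# exactly as in the code posted earlier in this thread.
```

The deepstream-test3 Python sample uses the same pad-added pattern for its source bins, which may be a useful reference.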

@miguelmndez
Would you mind elaborating a bit more?
If you don’t mind sharing the sample code that worked, it would be greatly appreciated

Thanks,
Jae