DeepStream GStreamer Pipeline Not Writing Output to Any File | NvSegVisual Output into a NumPy Array

Hey Folks,

I am trying to run my GStreamer-DeepStream pipeline (below) inside a Python wrapper using Gst-Python. The command-line pipeline works well for the same local input file, and the Python wrapper seems to load the model and config file, but nothing is saved to output.mkv. I am using filesink, and I don't see any errors either. Can someone suggest something? Thanks in advance!

My goals are:

• Use my Gst-Python wrapper to process my input video stream and save the output video
• Write the intermediate output of NvSegVisual to a NumPy array or any OpenCV-compatible form for post-analysis

GStreamer command-line pipeline

sudo gst-launch-1.0 filesrc location = 934.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt  ! nvinferbin config-file-path= /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/nv_seg_tao_unet_config.txt  ! nvsegvisual ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! 'video/x-h264,stream-format=avc' ! matroskamux ! filesink location=gst_out.mkv 

Python Wrapper

import gi
gi.require_version("Gst", "1.0")
from gi.repository import GLib, GObject, Gst
import sys
#sys.path.append('../')
import math

from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
import cv2
import pyds          # needed by seg_src_pad_buffer_probe below
import numpy as np   # needed by seg_src_pad_buffer_probe below
#import os.path
from os import path

### Sample probe for writing the nvsegvisual output to a file (can be ignored)
def seg_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        frame_number = frame_meta.frame_num
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            try:
                # Note that l_user.data needs a cast to pyds.NvDsUserMeta
                # The casting is done by pyds.NvDsUserMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                seg_user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break
            if seg_user_meta and seg_user_meta.base_meta.meta_type == \
                    pyds.NVDSINFER_SEGMENTATION_META:
                try:
                    # Note that seg_user_meta.user_meta_data needs a cast to
                    # pyds.NvDsInferSegmentationMeta
                    # The casting is done by pyds.NvDsInferSegmentationMeta.cast()
                    # The casting also keeps ownership of the underlying memory
                    # in the C code, so the Python garbage collector will leave
                    # it alone.
                    segmeta = pyds.NvDsInferSegmentationMeta.cast(seg_user_meta.user_meta_data)
                except StopIteration:
                    break
                # Retrieve mask data in the numpy format from segmeta
                # Note that pyds.get_segmentation_masks() expects object of
                # type NvDsInferSegmentationMeta
                masks = pyds.get_segmentation_masks(segmeta)
                masks = np.array(masks, copy=True, order='C')
                # map the obtained masks to colors of 2 classes.
                frame_image = map_mask_as_display_bgr(masks)
                cv2.imwrite(folder_name + "/" + str(frame_number) + ".jpg", frame_image)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
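
# map_mask_as_display_bgr and folder_name are not defined in this snippet;
# they come from the deepstream_python_apps deepstream_segmentation.py sample.
# A minimal sketch of the helper, assuming two classes; the directory name and
# colors here are illustrative assumptions, not the sample's exact values.
folder_name = "frames"  # hypothetical output directory; must exist before cv2.imwrite
COLORS = [(0, 0, 0), (0, 0, 255)]  # one BGR color per class id

def map_mask_as_display_bgr(mask):
    # Paint every pixel with the BGR color of its class id.
    bgr = np.zeros((mask.shape[0], mask.shape[1], 3), dtype=np.uint8)
    for class_id, color in enumerate(COLORS):
        bgr[mask == class_id] = color
    return bgr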

#### Main Pipeline
def main():
    GObject.threads_init()
    Gst.init(None)

    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("filesrc", "video-source")
    source.set_property("location", "34.mp4")
    pipeline.add(source)
    print('Source loaded')
    print(source)


    parse = Gst.ElementFactory.make("h264parse", "parse")
    pipeline.add(parse)
    print('H264parse loaded')

    decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")
    #decoder.set_property("drop-frame-interval", 0)
    pipeline.add(decoder)
    parse.link(decoder)
    print('NVV4l2Decoder loaded')


    mux = Gst.ElementFactory.make("nvstreammux", "mux")
    mux.get_request_pad("m.sink_0")
    mux.set_property("name", 'm')
    mux.set_property("batch-size", 1)   
    mux.set_property("width", 1920)
    mux.set_property("height", 1080) 
    pipeline.add(mux)
    decoder.link(mux)
    print('Streammux loaded')

    convert = Gst.ElementFactory.make("nvvideoconvert", "convert")
    pipeline.add(convert)
    mux.link(convert)
    print('NvvideoConvert loaded')

    nvds = Gst.ElementFactory.make("nvdspreprocess", "preprocess")
    nvds.set_property("config-file", "/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt")
    pipeline.add(nvds)
    convert.link(nvds)
    print('NVDSPreprocess loaded')

    infer = Gst.ElementFactory.make("nvinferbin", "infer")
    infer.set_property("config-file-path", "/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/nv_seg_tao_unet_config.txt")
    pipeline.add(infer)
    nvds.link(infer)
    print('INFERBIN loaded')
    print(infer)

    seg = Gst.ElementFactory.make("nvsegvisual", "seg")
    pipeline.add(seg)
    infer.link(seg)
    print('NvSegVisual loaded')
    print(type(seg))
    
    #convert2 = Gst.ElementFactory.make("nvvideoconvert", "convert")
    #pipeline.add(convert2)
    seg.link(convert)
    print('NvvideoConvert loaded')

    enc = Gst.ElementFactory.make("nvv4l2h264enc", "enc")
    pipeline.add(enc)
    convert.link(enc)
    print('NVV4L2H264ENC loaded')

    enc.link(parse)
    print('H264Parse loaded') 


    caps = Gst.Caps.from_string("video/x-h264,stream-format=avc")
    filter = Gst.ElementFactory.make("capsfilter", "filter")
    filter.set_property("caps", caps)
    pipeline.add(filter)
    parse.link(filter)
    print('FormatConvert loaded') 

    #queue = Gst.ElementFactory.make("queue", "queue")
    #pipeline.add(queue)
    #filter.link(queue)

    mkv = Gst.ElementFactory.make("matroskamux", "mkv")
    pipeline.add(mkv)
    filter.link(mkv)
    print('MatroSkamux loaded') 

    sink = Gst.ElementFactory.make("filesink", "video-sink")
    sink.set_property("location", 'sample_out.mkv')
    #sink.set_property("window-y", 0)
    #sink.set_property("window-width", 1280)
    #sink.set_property("window-height", 720)
    pipeline.add(sink)
    mkv.link(sink)
    print('FileSink loaded')

    

    # create an event loop and feed gstreamer bus mesages to it
    #loop = GLib.MainLoop()
    #bus = pipeline.get_bus()
    #bus.add_signal_watch()
    #bus.connect("message", bus_call, loop)

    
    '''
    # Lets add probe to get informed of the meta data generated, we add probe to
    # the src pad of the inference element
    seg_src_pad = seg.get_static_pad("src")
    if not seg_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        seg_src_pad.add_probe(Gst.PadProbeType.BUFFER, seg_src_pad_buffer_probe, 0)
    loop = GObject.MainLoop()
    '''
    
    loop = GObject.MainLoop()
    pipeline.set_state(Gst.State.PLAYING)
    print('END-PipeLine ') 
    '''
    ct=0
    try:
        loop.run()
        ct +=1
        print(ct) 
    except:
        pass
    '''

    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main()
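
Note that in this first version source is never linked to parse, and loop.run() is commented out, so the pipeline is set back to NULL immediately after reaching PLAYING and no frames ever reach the filesink. A minimal sketch of the missing event loop, using the bus_call helper imported above:

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()  # block until EOS or an error is posted on the bus
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)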

Output Log

Source loaded
<__gi__.GstFileSrc object at 0x7f3ce48bceb0 (GstFileSrc at 0x56077e92a320)>
H264parse loaded
NVV4l2Decoder loaded
Streammux loaded
NvvideoConvert loaded
NVDSPreprocess loaded
INFERBIN loaded
<__gi__.GstDsNvInferBin object at 0x7f3ce48c07d0 (GstDsNvInferBin at 0x56077f21e040)>
NvSegVisual loaded
<class '__gi__.GstNvSegVisual'>
NvvideoConvert loaded
NVV4L2H264ENC loaded
H264Parse loaded
FormatConvert loaded
MatroSkamux loaded
FileSink loaded
0:00:17.168951836 10445 0x56077ffe6a40 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/CA_CD.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x512x512       
1   OUTPUT kFLOAT softmax_1       512x512x3       

0:00:17.202368243 10445 0x56077ffe6a40 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/CA_CD.etlt_b1_gpu0_fp32.engine
0:00:17.215687040 10445 0x56077ffe6a40 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer_bin_nvinfer> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/nv_seg_tao_unet_config.txt sucessfully
END-PipeLine 

• Hardware Platform

Tesla T4

• DeepStream Version

deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.4
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3
gst-launch-1.0 version 1.20.3
GStreamer 1.20.3

Hi @nitinp14920914
Could you share the GST log with GST_DEBUG="*:6", e.g.

export GST_DEBUG="*:6"
gst-launch-1.0 filesrc location = 934.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! nvinferbin config-file-path= /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/nv_seg_tao_unet_config.txt ! nvsegvisual ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! 'video/x-h264,stream-format=avc' ! matroskamux ! filesink location=gst_out.mkv

outfile.txt (479.1 KB)
The DEBUG output is enclosed in the attached file.

In the log, I see the warnings below; could you test with the sample mp4 video (e.g. /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4) in the DeepStream SDK as the file source?

WARN basesrc gstbasesrc.c:3583:gst_base_src_start_complete: pad not activated yet

WARN qtdemux qtdemux_types.c:233:qtdemux_type_get: unknown QuickTime node type pasp

WARN qtdemux qtdemux.c:3031:qtdemux_parse_trex: failed to find fragment defaults for stream 1

Hey,

This is my version of the code. It looks like the problem is with my pipeline loading frames, which now gives me this error:

<__gi__.GstFileSrc object at 0x7ff74ea6aeb0 (GstFileSrc at 0x55e4352aa320)>
<__gi__.GstH264Parse object at 0x7ff74ea6e0a0 (GstH264Parse at 0x55e4352b1f20)>
H264parse loaded
NVV4l2Decoder loaded
Streammux loaded
<__gi__.GstNvStreamMux object at 0x7ff74ea6e3c0 (GstNvStreamMux at 0x55e4353500f0)>
NvvideoConvert loaded
NVDSPreprocess loaded
INFERBIN loaded
<__gi__.GstDsNvInferBin object at 0x7ff74ea6e7d0 (GstDsNvInferBin at 0x55e435b9d040)>
NvSegVisual loaded
<class '__gi__.GstNvSegVisual'>
NvvideoConvert loaded
NVV4L2H264ENC loaded
H264Parse loaded
FormatConvert loaded
MatroSkamux loaded
FileSink loaded
Starting pipeline 

0:00:01.845113238 19096 0x55e435bbdb20 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/elio_admin/deep/pipeline/CA_CD.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x512x512       
1   OUTPUT kFLOAT softmax_1       512x512x3       

0:00:01.878874411 19096 0x55e435bbdb20 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/elio_admin/deep/pipeline/CA_CD.etlt_b1_gpu0_fp32.engine
0:00:01.892211482 19096 0x55e435bbdb20 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer_bin_nvinfer> [UID 1]: Load new model:nv_seg_tao_unet_config.txt sucessfully
Error: gst-stream-error-quark: No valid frames found before end of stream (5): gstbaseparse.c(3603): gst_base_parse_loop (): /GstPipeline:pipeline0/GstH264Parse:parse
END-PipeLine 

My code looks like:

####sudo python sample_gst.py

# imports and seg_src_pad_buffer_probe are identical to the previous post


def main():

    GObject.threads_init()
    Gst.init(None)
    #Gst.debug_set_active(True)
    #Gst.debug_set_default_threshold(4)
    
    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("filesrc", "video-source")
    source.set_property("location", "934.mp4")#'/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4'
    pipeline.add(source)
    print('Source loaded')
    print(source)


    parse = Gst.ElementFactory.make("h264parse", "parse")
    pipeline.add(parse)
    source.link(parse)
    print(parse)
    print('H264parse loaded')

    decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")
    #decoder.set_property("drop-frame-interval", 0)
    pipeline.add(decoder)
    parse.link(decoder)
    print('NVV4l2Decoder loaded')


    mux = Gst.ElementFactory.make("nvstreammux", "mux")
    mux.get_request_pad("m.sink_0")
    mux.set_property("name", 'm')
    mux.set_property("batch-size", 1)   
    mux.set_property("width", 1920)
    mux.set_property("height", 1080) 
    pipeline.add(mux)
    decoder.link(mux)
    print('Streammux loaded')
    print(mux)



    
    convert = Gst.ElementFactory.make("nvvideoconvert", "convert")
    pipeline.add(convert)
    mux.link(convert)
    print('NvvideoConvert loaded')

    nvds = Gst.ElementFactory.make("nvdspreprocess", "preprocess")
    nvds.set_property("config-file", "/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt")
    pipeline.add(nvds)
    convert.link(nvds)
    print('NVDSPreprocess loaded')

    infer = Gst.ElementFactory.make("nvinferbin", "infer")
    infer.set_property("config-file-path","nv_seg_tao_unet_config.txt")
    pipeline.add(infer)
    nvds.link(infer)
    print('INFERBIN loaded')
    print(infer)

    seg = Gst.ElementFactory.make("nvsegvisual", "seg")
    pipeline.add(seg)
    infer.link(seg)
    print('NvSegVisual loaded')
    print(type(seg))
    
    #convert2 = Gst.ElementFactory.make("nvvideoconvert", "convert")
    #pipeline.add(convert2)
    seg.link(convert)
    print('NvvideoConvert loaded')

    enc = Gst.ElementFactory.make("nvv4l2h264enc", "enc")
    pipeline.add(enc)
    convert.link(enc)
    print('NVV4L2H264ENC loaded')

    enc.link(parse)
    print('H264Parse loaded') 


    caps = Gst.Caps.from_string("video/x-h264,stream-format=avc")
    filter = Gst.ElementFactory.make("capsfilter", "filter")
    filter.set_property("caps", caps)
    pipeline.add(filter)
    parse.link(filter)
    print('FormatConvert loaded') 

    #queue = Gst.ElementFactory.make("queue", "queue")
    #pipeline.add(queue)
    #filter.link(queue)

    mkv = Gst.ElementFactory.make("matroskamux", "mkv")
    pipeline.add(mkv)
    filter.link(mkv)
    print('MatroSkamux loaded') 

    sink = Gst.ElementFactory.make("filesink", "video-sink")
    sink.set_property("location", 'sample_out.mkv')
    #sink.set_property("window-y", 0)
    #sink.set_property("window-width", 1280)
    #sink.set_property("window-height", 720)
    pipeline.add(sink)
    mkv.link(sink)
    print('FileSink loaded')

    

    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    
    
    # Lets add probe to get informed of the meta data generated, we add probe to
    #the src pad of the inference element
    seg_src_pad = seg.get_static_pad("src")
    if not seg_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        seg_src_pad.add_probe(Gst.PadProbeType.BUFFER, seg_src_pad_buffer_probe, 0)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)




    print('END-PipeLine ')

if __name__ == "__main__":
    main()

Please suggest something…

Hey, here's the output log using the suggested input stream:

outfile.txt (637.1 KB)

Please add qtdemux in the Python code; it is needed to demux the mp4 file.
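
qtdemux creates its source pads dynamically ("sometimes" pads), so a static demux.link(parse) cannot succeed; the usual pattern is to link inside a pad-added handler. A minimal sketch, assuming the demux and parse element names used in the code in this thread:

def on_demux_pad_added(demux, pad, parse):
    # qtdemux adds its src pads only after parsing the container, so link here.
    caps = pad.get_current_caps()
    if caps and caps.to_string().startswith("video/x-h264"):
        pad.link(parse.get_static_pad("sink"))

demux.connect("pad-added", on_demux_pad_added, parse)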


Here is the version with qtdemux, but it seems I am still getting a similar error.

####sudo python sample_gst.py

# imports and seg_src_pad_buffer_probe are unchanged from the previous posts


def main():
    GObject.threads_init()
    Gst.init(None)

    pipeline = Gst.Pipeline()
    source = Gst.ElementFactory.make("filesrc", "video-source")
    source.set_property("location","934.mp4")#'/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4' )
    demux = Gst.ElementFactory.make('qtdemux','qtdemux')
    parse = Gst.ElementFactory.make("h264parse", "parse")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")
    #decoder.set_property("drop-frame-interval", 0)
    mux = Gst.ElementFactory.make("nvstreammux", "mux")
    mux.get_request_pad("m.sink_0")
    mux.set_property("name", 'm')
    mux.set_property("batch-size", 1)   
    mux.set_property("width", 1920)
    mux.set_property("height", 1080)     
    convert = Gst.ElementFactory.make("nvvideoconvert", "convert")
    nvds = Gst.ElementFactory.make("nvdspreprocess", "preprocess")
    nvds.set_property("config-file", "/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt")
    infer = Gst.ElementFactory.make("nvinferbin", "infer")
    infer.set_property("config-file-path", "/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/nv_seg_tao_unet_config.txt")

    seg = Gst.ElementFactory.make("nvsegvisual", "seg")    
    #convert2 = Gst.ElementFactory.make("nvvideoconvert", "convert")
    #pipeline.add(convert2)
    enc = Gst.ElementFactory.make("nvv4l2h264enc", "enc")
    caps = Gst.Caps.from_string("video/x-h264,stream-format=avc")
    filter = Gst.ElementFactory.make("capsfilter", "filter")
    filter.set_property("caps", caps)
    mkv = Gst.ElementFactory.make("matroskamux", "mkv")
    sink = Gst.ElementFactory.make("filesink", "video-sink")
    sink.set_property("location", 'sample_out.mkv')
 

## add elements to the pipeline
    pipeline.add(source)
    print('Source loaded')
    print(source)
    pipeline.add(demux)
    print('QtDemux')   
    pipeline.add(parse)
    print('H264parse loaded')
    pipeline.add(decoder)
    print('NVV4l2Decoder loaded')    
    pipeline.add(mux)
    print('Streammux loaded')
    pipeline.add(convert)
    print('NvvideoConvert loaded')
    pipeline.add(nvds)
    print('NVDSPreprocess loaded')
    pipeline.add(infer)
    pipeline.add(seg)
    print('NvSegVisual loaded')
    print(type(seg))
    print('NvvideoConvert loaded')
    pipeline.add(enc)
    print('NVV4L2H264ENC loaded')
    print('H264Parse loaded')
    pipeline.add(filter)
    print('FormatConvert loaded')
    pipeline.add(mkv)
    print('MatroSkamux loaded')
    pipeline.add(sink)
    print('FileSink loaded')

    source.link(demux)
    #source.link(parse)
    demux.link(parse)
    parse.link(decoder)


    #sinkpad = mux.get_request_pad("sink_0")
    #if not sinkpad:
    #    sys.stderr.write(" Unable to get the sink pad of streammux \n")
    #srcpad = decoder.get_static_pad("src")
    #if not srcpad:
    #    sys.stderr.write(" Unable to get source pad of decoder \n")
    #srcpad.link(sinkpad)

    decoder.link(mux)
    mux.link(convert)
    convert.link(nvds)
    nvds.link(infer)
    #print(sinkpad)#.link(mux)
    
    #mux.link(infer)

    infer.link(seg)
    print(infer)
    seg.link(convert)
    convert.link(enc)
    enc.link(parse)
    parse.link(filter)
    filter.link(mkv)
    mkv.link(sink)

   

    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the src pad of the inference element
    #seg_src_pad = seg.get_static_pad("src")
    #if not seg_src_pad:
    #    sys.stderr.write(" Unable to get src pad \n")
    #else:
    #    seg_src_pad.add_probe(Gst.PadProbeType.BUFFER, seg_src_pad_buffer_probe, 0)

    
    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)



if __name__ == "__main__":
    main()

Error

Source loaded
<__gi__.GstFileSrc object at 0x7f6d3ef01eb0 (GstFileSrc at 0x564a89c54320)>
QtDemux
H264parse loaded
NVV4l2Decoder loaded
Streammux loaded
NvvideoConvert loaded
NVDSPreprocess loaded
NvSegVisual loaded
<class '__gi__.GstNvSegVisual'>
NvvideoConvert loaded
NVV4L2H264ENC loaded
H264Parse loaded
FormatConvert loaded
MatroSkamux loaded
FileSink loaded
<__gi__.GstDsNvInferBin object at 0x7f6d3ef058c0 (GstDsNvInferBin at 0x564a8a551080)>
Starting pipeline 

0:00:02.835462589 29976 0x564a8a572d00 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/CA_CD.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x512x512       
1   OUTPUT kFLOAT softmax_1       512x512x3       

0:00:02.868366501 29976 0x564a8a572d00 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/CA_CD.etlt_b1_gpu0_fp32.engine
0:00:02.881897416 29976 0x564a8a572d00 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer_bin_nvinfer> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/nv_seg_tao_unet_config.txt sucessfully
Error: gst-stream-error-quark: Internal data stream error. (1): qtdemux.c(6073): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstQTDemux:qtdemux:
streaming stopped, reason not-linked (-1)

If you replace "934.mp4" with "/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4", do you see the same error?

Yes, it does.

After testing your code, there are some problems:

  1. There is no proper implementation of mux.get_request_pad("sink_0"): the pad is requested with the wrong name ("m.sink_0") and the returned request pad is never linked to the decoder.
  2. There is no dedicated h264parse after nvv4l2h264enc.
    Please refer to deepstream_python_apps/deepstream_test_1.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub. First get the basic pipeline running, then add nvdspreprocess, nvsegvisual, and filesink step by step; a minimal sketch of the two fixes above follows.
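
A minimal sketch of those two fixes, assuming the element names (mux, decoder, seg, enc, filter, pipeline) from the code above:

# 1. Request the streammux sink pad by its pad-template name ("sink_0",
#    not "m.sink_0") and link the decoder's src pad to it explicitly.
sinkpad = mux.get_request_pad("sink_0")
srcpad = decoder.get_static_pad("src")
srcpad.link(sinkpad)

# 2. Give the encode side its own elements instead of re-linking the
#    nvvideoconvert and h264parse already used on the decode side.
convert2 = Gst.ElementFactory.make("nvvideoconvert", "convert2")
parse2 = Gst.ElementFactory.make("h264parse", "parse2")
pipeline.add(convert2)
pipeline.add(parse2)
seg.link(convert2)
convert2.link(enc)
enc.link(parse2)
parse2.link(filter)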

There has been no update from you for a while, so we are assuming this is not an issue anymore.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.