Inference on RTSP Stream

I’ve been trying to run the Python DS sample 1 on an RTSP stream. However, I received an error like:

Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtsp-source/GstUDPSrc:udpsrc1:
streaming stopped, reason not-linked (-1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the rtsp stream
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("rtspsrc", "rtsp-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    
    depay = Gst.ElementFactory.make('rtph264depay', "depay")
    if not depay:
        sys.stderr.write(" Unable to create depayer \n")

    # Since the depayloaded data is an elementary h264 stream,
    # we need an h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', "rtsp://admin:ayv3644@192.168.1.244:554/live")
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(depay)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # rtsp-source -> depay -> h264-parser -> nvv4l2-decoder ->
    # streammux -> nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(depay)
    depay.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

My pipeline creation is like the above. I can’t get the stream from my IP camera into my inference engine, which I was able to do before using OpenCV + GStreamer with:

gst_pipeline = ('rtspsrc location=rtsp://id:passwd@ip:port/live ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! appsink')

rtspsrc has “sometimes” pads, so you can’t link it in the usual way. You need to add a callback for the “pad-added” signal and link the pads manually when the src pad is created. You may also need to check the capabilities (caps) so that you don’t try to link an audio pad to a video sink.

Fwiw, I ended up using uridecodebin in my pipeline, since it handles the depayloader and decoder for you; however, a callback to connect the pad(s) from uridecodebin to the rest of the pipeline is still required.
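A minimal sketch of that approach, reusing the on_pad_added callback shown further down (the URI here is illustrative, and streammux is the element from your pipeline):

# uridecodebin replaces rtspsrc + rtph264depay + h264parse + nvv4l2decoder.
uridecodebin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
uridecodebin.set_property("uri", "rtsp://user:password@camera-ip:554/live")
pipeline.add(uridecodebin)

# uridecodebin's src pads only appear once the stream type is known,
# so link to streammux from a pad-added callback instead of statically.
sinkpad = streammux.get_request_pad("sink_0")
uridecodebin.connect("pad-added", on_pad_added, sinkpad)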

Also, you can check the return value of Gst.Element.link() (some_element.link(other_element)) to see which link fails. Linking the pads manually gives you even more debug info through its Gst.PadLinkReturn value.
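For example (Gst.Element.link() returns a bool in the Python bindings):

if not h264parser.link(decoder):
    sys.stderr.write(" Failed to link h264parser to decoder \n")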

Your callback to handle the pad-added signal could look something like:

def on_pad_added(source: Gst.Element, src_pad: Gst.Pad, sink_pad: Gst.Pad):
    # You might want to check that the caps are compatible here,
    # e.g. with src_pad.can_link(sink_pad).
    ret = src_pad.link(sink_pad)  # type: Gst.PadLinkReturn
    if ret != Gst.PadLinkReturn.OK:
        raise RuntimeError(f"pad could not link because {ret.value_name}")
        # (or handle the error however you choose)

The relevant documentation for those calls is the PyGObject reference for Gst.Pad (https://lazka.github.io/pgi-docs/Gst-1.0/classes/Pad.html) and Gst.PadLinkReturn (https://lazka.github.io/pgi-docs/Gst-1.0/enums.html#Gst.PadLinkReturn). Also, the type hints (…source: Gst.Element…) are optional and ignored by Python at runtime, but they will help your IDE give you proper code completion.

You connect that callback to an element by doing something like:

sink_pad = sink_element.get_static_pad('sink')  # or a request pad
source_element.connect('pad-added', on_pad_added, sink_pad)

That last argument, sink_pad, can actually be anything you need (the element you want to link to, a pad you request on the fly, or the whole parent bin if you wanted), and it will be passed to the callback as the third argument.
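Applied to your pipeline, that means dropping the source.link(depay) call and connecting rtspsrc to the depayloader’s sink pad instead (a sketch using the element names from your post):

# rtspsrc has no src pad at creation time, so remove source.link(depay)
# and let the callback link it once the pad appears:
depay_sink = depay.get_static_pad("sink")
source.connect("pad-added", on_pad_added, depay_sink)

# The downstream elements have "always" pads and link as before:
depay.link(h264parser)
h264parser.link(decoder)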

Since pads don’t always exist on every element, you may need to be creative so that the pad you are linking to exists and is able to be linked. This is unfortunately the dance that must be performed with ‘sometimes’ pads, since an RTSP source, for example, has no idea whether it’s going to have an audio stream when it’s initially created.
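One way to handle that is to inspect the caps in the callback and ignore anything that isn’t video (a sketch; rtspsrc src pads carry application/x-rtp caps with a media field):

def on_rtsp_pad_added(source: Gst.Element, src_pad: Gst.Pad, sink_pad: Gst.Pad):
    caps = src_pad.get_current_caps() or src_pad.query_caps(None)
    structure = caps.get_structure(0)
    # rtspsrc pads look like: application/x-rtp, media=(string)video, ...
    if structure.get_string("media") != "video":
        return  # skip audio (or any other) streams
    if src_pad.link(sink_pad) != Gst.PadLinkReturn.OK:
        sys.stderr.write(" Unable to link rtspsrc video pad \n")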

Thanks for your reply, I will try it ASAP. Besides that, do you know a good source for getting intuitive knowledge about GStreamer in Python, or the GStreamer framework as a whole (core terminology, operations, etc.)?

For Python?

I reference the link below, but you can probably learn the general terminology and the rest from the C version of the tutorials. I followed the C tutorials before I ever touched the Python bindings for GStreamer. Even if you don’t know C, you can retype the examples, and by the time you’re done you’ll at least know a little. The basic concepts (Elements, Bins, Pad types, Bus, MainLoop, callbacks, etc.) are all the same; the Python is just a thin wrapper around the C anyway.

https://gstreamer.freedesktop.org/documentation/tutorials/index.html