Having trouble running inferences on videos. "Error: streaming stopped, reason error (-5)."

• Hardware Platform (Jetson / GPU) Jetson Orin Nano
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2

When I try to run the following code with the sample_1080p_h264.mp4 file:

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def main(args):
    # Initialize GStreamer
    Gst.init(None)

    # Create the pipeline
    pipeline = Gst.Pipeline()

    # Create common elements
    streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

    # Check if streammux and sink were created successfully
    if not all([pipeline, streammux, sink]):
        sys.stderr.write("Unable to create streammux or sink.\n")
        return -1

    # Set properties for streammux
    streammux.set_property('width', 1280)
    streammux.set_property('height', 720)
    streammux.set_property('batch-size', len(args) - 1)
    streammux.set_property('batched-push-timeout', 4000000)

    # Add streammux and sink to the pipeline
    pipeline.add(streammux)
    pipeline.add(sink)

    # Create elements for each input video
    for i, video_path in enumerate(args[1:]):
        # Create elements
        source = Gst.ElementFactory.make("filesrc", f"file-source-{i}")
        demuxer = Gst.ElementFactory.make("qtdemux", f"demuxer-{i}")
        parser = Gst.ElementFactory.make("h264parse", f"parser-{i}")
        decoder = Gst.ElementFactory.make("nvv4l2decoder", f"nv-decoder-{i}")
        # Add inference element
        nvinfer = Gst.ElementFactory.make("nvinfer", f"nvinfer-{i}")
        # Add OSD element
        nvdsosd = Gst.ElementFactory.make("nvdsosd", f"nv-osd-{i}")  # name must be unique per stream

        # Check if all elements were created successfully
        if not all([source, demuxer, parser, decoder, nvinfer, nvdsosd]):
            sys.stderr.write(f"Unable to create elements for video {i}.\n")
            return -1

        # Set source property
        source.set_property('location', video_path)

        # Set properties for the inference element
        nvinfer.set_property('config-file-path', 'model_config.txt')  # Set your model config file path

        # Add elements to the pipeline
        pipeline.add(source)
        pipeline.add(demuxer)
        pipeline.add(parser)
        pipeline.add(decoder)
        pipeline.add(nvinfer)
        pipeline.add(nvdsosd)

        # Link elements
        source.link(demuxer)
        demuxer.connect("pad-added", on_pad_added, parser)
        parser.link(decoder)
        decoder.link(nvinfer)
        nvinfer.link(nvdsosd)

        # Link nvdsosd to streammux
        sinkpad = streammux.get_request_pad(f"sink_{i}")
        srcpad = nvdsosd.get_static_pad("src")
        srcpad.link(sinkpad)

    # Link streammux to sink
    streammux.link(sink)

    # Start playing
    ret = pipeline.set_state(Gst.State.PLAYING)
    if ret == Gst.StateChangeReturn.FAILURE:
        sys.stderr.write("Unable to set the pipeline to the playing state.\n")
        return -1

    # Wait until error or EOS
    bus = pipeline.get_bus()
    msg = bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE,
        Gst.MessageType.ERROR | Gst.MessageType.EOS
    )

    # Free resources
    pipeline.set_state(Gst.State.NULL)

def on_pad_added(src, new_pad, data):
    print(f"Received new pad {new_pad.get_name()} from {src.get_name()}")
    
    # The pad's caps may not be negotiated yet, so fall back to query_caps.
    caps = new_pad.get_current_caps() or new_pad.query_caps(None)
    if caps.get_structure(0).get_name().startswith("video/"):
        sink_pad = data.get_static_pad("sink")
        if not sink_pad.is_linked():
            new_pad.link(sink_pad)

if __name__ == '__main__':
    if len(sys.argv) < 2:
        sys.stderr.write("Usage: python3 script_name.py <video1> <video2> ...\n")
        sys.exit(1)
    sys.exit(main(sys.argv))

I get the following error: "Error: streaming stopped, reason error (-5)".

I am using the same model that deepstream-test1 uses.
Also, when I try to run deepstream-test1, I get this error.

I can play the videos and RTSP streams just fine using DeepStream. The problem only happens when I try to run inference on them.

@daniel.gayer I would suggest using uridecodebin instead of chaining the plugins manually. A pipeline you can test is: uridecodebin → streammux → nvinfer → demux → queue → nvvideoconvert → nvosd → eglsink

Also, your pipeline linking looks incorrect: each source chain runs through its own nvinfer and nvdsosd before it ever reaches the streammux, whereas nvstreammux should batch the decoded frames first, with a single nvinfer downstream of it (see the sketch below).
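
For reference, here is a minimal Python sketch of that suggested layout for a single source (assumptions: the same model_config.txt from the post above, an nveglglessink as in the original code; the demux/queue stages are omitted since they only matter for per-stream outputs):

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def on_pad_added(decodebin, pad, streammux):
    # uridecodebin creates its pads dynamically; link only the video pad
    # to a request pad on the muxer.
    caps = pad.get_current_caps() or pad.query_caps(None)
    if not caps.get_structure(0).get_name().startswith("video/"):
        return
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad.is_linked():
        pad.link(sinkpad)

def main(uri):
    Gst.init(None)
    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("uridecodebin", "source")
    streammux = Gst.ElementFactory.make("nvstreammux", "mux")
    pgie = Gst.ElementFactory.make("nvinfer", "pgie")
    conv = Gst.ElementFactory.make("nvvideoconvert", "conv")
    osd = Gst.ElementFactory.make("nvdsosd", "osd")
    sink = Gst.ElementFactory.make("nveglglessink", "sink")
    if not all([pipeline, source, streammux, pgie, conv, osd, sink]):
        sys.stderr.write("Unable to create pipeline elements.\n")
        return -1

    source.set_property("uri", uri)
    streammux.set_property("batch-size", 1)
    streammux.set_property("width", 1920)
    streammux.set_property("height", 1080)
    pgie.set_property("config-file-path", "model_config.txt")  # assumption: config from the post

    for elem in (source, streammux, pgie, conv, osd, sink):
        pipeline.add(elem)

    # Static part of the chain: mux -> infer -> convert -> OSD -> sink.
    streammux.link(pgie)
    pgie.link(conv)
    conv.link(osd)
    osd.link(sink)
    # Dynamic part: decodebin's video pad -> mux request pad.
    source.connect("pad-added", on_pad_added, streammux)

    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv[1]))  # pass a URI such as file:///path/to/video.mp4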


Is this still a DeepStream issue to support? Thanks!
Please refer to this pipeline, which is similar to yours:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! fakesink
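
If it helps with turning that into Python: Gst.parse_launch accepts essentially the same pipeline string, so a minimal sketch (no shell quoting needed around the caps) is:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Same pipeline string as the gst-launch command above, parsed directly.
pipeline = Gst.parse_launch(
    "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 "
    "! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 "
    "! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml "
    "! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! nvdsosd ! fakesink")
pipeline.set_state(Gst.State.PLAYING)
# Block until an error or end-of-stream, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)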

Thanks! It seems to have solved it, but now I am hitting another error. It seems to be related to my video source, though; I will run some tests here before asking in the forum again.

Thanks! It worked perfectly! I just need to translate these commands into Python code now. Would you have a gst-launch example that takes an RTSP stream as input and outputs another RTSP stream with the inference results?
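
Not an official sample, but a sketch of the usual shape of that pipeline, assuming the camera publishes H.264 (rtsp://<camera-uri> is a placeholder, and the config path is the one from the command above):

gst-launch-1.0 rtspsrc location=rtsp://<camera-uri> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 live-source=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5400 sync=false

Note that gst-launch-1.0 itself cannot host an RTSP server, so this only pushes RTP packets to a local UDP port; the usual DeepStream pattern is a small GstRTSPServer instance that re-serves udpsrc port=5400 as an RTSP stream, which is what the deepstream-test1-rtsp-out sample in the deepstream_python_apps repository shows end to end.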
