Modify the input format of deepstream_python_apps

Hi!
I successfully ran the example programs in deepstream_python_apps and found that each sample takes a video stream as input. Could you give me sample code for using pictures as input instead?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

You just need to replace the source element with another source plugin, e.g. filesrc or multifilesrc. This is a common GStreamer question; please feel free to change it yourself.
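For a single JPEG, a minimal command-line sketch (assuming DeepStream on Jetson; the file name, resolution, and config path are placeholders you will need to adjust for your setup) could look like:

```shell
# Decode one JPEG on the hardware decoder, batch it with nvstreammux,
# then run inference and draw the results.
gst-launch-1.0 filesrc location=sample.jpg ! jpegparse ! nvv4l2decoder mjpeg=1 \
    ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 \
    ! nvinfer config-file-path=dstest1_pgie_config.txt \
    ! nvvideoconvert ! nvdsosd ! nveglglessink
```

Note that nvinfer expects batched NVMM buffers, so the decoded frame must go through nvstreammux before inference.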

Hi!
Sorry, I have only just started working with these configurations. Could you modify the pipeline of the Test1 example and give the specific code? Thank you very much.

Hi!
This is my own pipeline. Its main purpose is to take a picture as input for analysis, but it always reports errors. How can I fix it? Thanks!

    import sys
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import GObject, Gst

    GObject.threads_init()
    Gst.init(None)

    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    source.set_property('location', "./1.jpg")
    pipeline.add(source)

    decoder = Gst.ElementFactory.make("jpegdec", "jpeg-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Decoder \n")
    pipeline.add(decoder)

    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    pgie.set_property('config-file-path', "config_infer_primary_peoplenet.txt")
    pipeline.add(pgie)

    print("Creating filter1 \n ")
    caps1 = Gst.Caps.from_string("image/jpeg,framerate=(fraction)30/1")
    filter = Gst.ElementFactory.make("capsfilter", "filter")
    if not filter:
        sys.stderr.write(" Unable to get the caps filter \n")
    filter.set_property("caps", caps1)
    pipeline.add(filter)

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    pipeline.add(nvvidconv)

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    pipeline.add(nvosd)

    sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
    sink.set_property('enable-last-sample', False)
    pipeline.add(sink)

    print("Linking elements in the Pipeline \n")
    source.link(filter)
    filter.link(decoder)
    decoder.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)

    # create an event loop and feed GStreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)