Using DeepStream to analyze a picture

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.x

Hi!
deepstream_python_apps provides many examples of video-stream analysis, but I want to analyze pictures: I input a picture and get the analysis results. The following is the pipeline I wrote, but it always reports errors. How can I modify it?

    GObject.threads_init()
    Gst.init(None)

    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    source.set_property('location', "./1.jpg")
    pipeline.add(source)

    decoder = Gst.ElementFactory.make("jpegdec", "jpeg-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Decoder \n")
    pipeline.add(decoder)
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    pgie.set_property('config-file-path', "config_infer_primary_peoplenet.txt")
    pipeline.add(pgie)

    print("Creating filter1 \n ")
    caps1 = Gst.Caps.from_string("image/jpeg,framerate=(fraction)30/1")
    filter = Gst.ElementFactory.make("capsfilter", "filter")
    if not filter:
        sys.stderr.write(" Unable to get the caps filter \n")
    filter.set_property("caps", caps1)
    pipeline.add(filter)

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    pipeline.add(nvvidconv)

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    pipeline.add(nvosd)

    sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
    sink.set_property('enable-last-sample', False)
    pipeline.add(sink)

    print("Linking elements in the Pipeline \n")
    source.link(filter)
    filter.link(decoder)
    decoder.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # start playback and listen for events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

How much did you know about GStreamer and gst-python before you started with DeepStream and pyds?

Hello! I first encountered them through the official Python examples.

So… you want to run inference on an image?
You can look at this sample:
/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-image-decode-test
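For reference, the decode chain in that sample differs from the pipeline above in one key way: the decoded frame goes through nvstreammux before nvinfer, because nvinfer only accepts batched buffers in NVMM memory. A rough gst-launch-1.0 sketch of that sample's pipeline (unverified on your setup; the resolution, batch-size, and config path are placeholders):

```shell
# Sketch of the deepstream-image-decode-test pipeline as a gst-launch line.
# jpegparse + nvv4l2decoder decode the JPEG on the HW decoder (Jetson);
# nvstreammux batches the frame into NVMM memory, which nvinfer requires.
gst-launch-1.0 filesrc location=./1.jpg ! jpegparse ! nvv4l2decoder mjpeg=1 ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary_peoplenet.txt ! \
  nvvideoconvert ! nvdsosd ! fakesink
```

In the Python version this means creating an nvstreammux element and linking the decoder's src pad to a request pad obtained with mux.get_request_pad("sink_0") before linking the mux to pgie.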

There is a SW JPEG decoder in GStreamer: jpegdec (gstreamer.freedesktop.org).

DeepStream also provides a HW JPEG decoder: Gst-nvjpegdec — DeepStream 6.0 Release documentation.
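To make the SW/HW distinction concrete, here are two minimal decode-only pipeline descriptions (a hedged sketch; element availability depends on the platform and DeepStream version):

```shell
# SW path: plain GStreamer JPEG decoder, output lands in system memory
gst-launch-1.0 filesrc location=./1.jpg ! jpegdec ! videoconvert ! fakesink

# HW path: DeepStream's nvjpegdec, decoding directly into NVMM device memory
gst-launch-1.0 filesrc location=./1.jpg ! nvjpegdec ! fakesink
```

Either way, for inference the decoded frame still has to be fed through nvstreammux before nvinfer.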
