Building Real-Time Video AI Applications Assessments

Hi, I’m taking this course: Course Detail | NVIDIA

The assessment just asks you to edit the FIXME parts.
Here is the error I encountered:

Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(794): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: pgie_config_dashcamnet.txt

My whole code:

4.2

#Import necessary libraries
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GLib
from common.bus_call import bus_call
import pyds

def run(input_file_path):
    global inference_output
    inference_output = []
    Gst.init(None)

    # Create the elements that will form the pipeline
    print("Creating Pipeline")
    pipeline = Gst.Pipeline()

    # source=Gst.ElementFactory.make(<<<<FIXME>>>>, "file-source")
    # source.set_property('location', <<<<FIXME>>>>)
    # h264parser=Gst.ElementFactory.make(<<<<FIXME>>>>, "h264-parser")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    source.set_property('location', input_file_path)
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")

    # streammux=Gst.ElementFactory.make(<<<<FIXME>>>>, "Stream-muxer")
    # streammux.set_property('width', <<<<FIXME>>>>)
    # streammux.set_property('height', <<<<FIXME>>>>)
    # streammux.set_property('batch-size', <<<<FIXME>>>>)
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    streammux.set_property('width', 720)
    streammux.set_property('height', 1280)
    streammux.set_property('batch-size', 1)

    # pgie=Gst.ElementFactory.make(<<<<FIXME>>>>, "primary-inference")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    pgie.set_property('config-file-path', 'pgie_config_dashcamnet.txt')

    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    # nvosd=Gst.ElementFactory.make(<<<<FIXME>>>>, "onscreendisplay")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
    caps = Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)

    encoder = Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    encoder.set_property("bitrate", 2000000)

    # sink=Gst.ElementFactory.make(<<<<FIXME>>>>, 'filesink')
    sink = Gst.ElementFactory.make("filesink", 'filesink')
    sink.set_property('location', 'output.mpeg4')
    sink.set_property("sync", 1)

    # Add the elements to the pipeline
    print("Adding elements to Pipeline")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    # pipeline.add(<<<<FIXME>>>>)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv2)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(sink)

    # Link the elements together
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    decoder.get_static_pad('src').link(streammux.get_request_pad("sink_0"))
    # streammux.link(<<<<FIXME>>>>)
    # <<<<FIXME>>>>.link(nvvidconv1)
    # nvvidconv1.link(<<<<FIXME>>>>)
    # <<<<FIXME>>>>.link(nvvidconv2)
    streammux.link(pgie)
    pgie.link(nvvidconv1)
    nvvidconv1.link(nvosd)
    nvosd.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(sink)

    # Attach probe to the OSD sink pad
    osdsinkpad = nvosd.get_static_pad("sink")
    # <<<<FIXME>>>>.add_probe(Gst.PadProbeType.BUFFER, <<<<FIXME>>>>, 0)
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Start playback and listen to events
    print("Starting pipeline")

    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    pipeline.set_state(Gst.State.NULL)
    return inference_output
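
(osd_sink_pad_buffer_probe is referenced above but not included in the paste; it is presumably defined in an earlier notebook cell. For completeness, a minimal probe in the style of the deepstream_python_apps samples might look roughly like the sketch below; the actual assessment version also records per-frame results in inference_output.)

# Minimal buffer probe sketch, modeled on the deepstream_python_apps samples.
# It walks the batch metadata attached upstream by nvinfer and counts the
# detected objects per frame.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Batch metadata is attached to the Gst.Buffer by the DeepStream elements
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        num_objects = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            # Cast gives access to per-object fields such as class_id
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            num_objects += 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        print(f"Frame {frame_meta.frame_num}: {num_objects} objects detected")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

Note that the probe only fires if buffers actually flow through the nvdsosd sink pad, i.e. nvdsosd must be linked into the chain.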

By the way, I got 0/20 on the assessment. I don’t think I got all of them wrong, so I suspect my answers were not saved.

The “pgie_config_dashcamnet.txt” content is wrong.

I removed it and put the file where it belongs, but I still get the same error.

The log just says the config file is wrong. Please check it.
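
To sanity-check the file yourself: an nvinfer configuration is an INI-style file with a [property] group of key=value lines. The sketch below uses Python's configparser, which is not the parser gst-nvinfer uses internally, so it only catches gross formatting problems (missing [property] group, truncated or garbled lines); the default file name is just the one from your post.

import sys
import configparser

def check_nvinfer_config(path):
    # Try to read the file as INI-style sections of key=value pairs.
    # strict=False tolerates duplicate keys, which some configs contain.
    cfg = configparser.ConfigParser(strict=False)
    try:
        read_ok = cfg.read(path)
    except configparser.Error as err:
        print(f"Parse error in {path}: {err}")
        return False
    if not read_ok:
        print(f"Could not open {path}")
        return False
    if "property" not in cfg:
        print(f"{path} has no [property] group")
        return False
    for section in cfg.sections():
        print(f"[{section}]")
        for key, value in cfg[section].items():
            print(f"  {key} = {value}")
    return True

if __name__ == "__main__":
    check_nvinfer_config(sys.argv[1] if len(sys.argv) > 1 else "pgie_config_dashcamnet.txt")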

A better way to learn is to study Welcome to the DeepStream Documentation — DeepStream documentation 6.4 documentation and to run deepstream-test1.

I still don’t understand what this error means:
“Error: gst-library-error-quark: Configuration file not provided (5): gstnvinfer.cpp(788): gst_nvinfer_start (): /GstPipeline:pipeline5/GstNvInfer:primary-inference”

gst-nvinfer cannot find the configuration file you configured.

You set the configuration file here, but the file is not in the same folder as your Python app.
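
One way to rule that out is to build an absolute path to the config and verify it exists before handing it to nvinfer. The sketch below assumes the file sits next to your Python script; the file name is taken from your post, and inside a Jupyter notebook you would use os.getcwd() instead of __file__.

import os
import sys

# Build an absolute path to the nvinfer config so it is found no matter which
# directory the application is launched from.
CONFIG_NAME = "pgie_config_dashcamnet.txt"
CONFIG_DIR = os.path.dirname(os.path.abspath(__file__))
CONFIG_PATH = os.path.join(CONFIG_DIR, CONFIG_NAME)

if not os.path.isfile(CONFIG_PATH):
    sys.exit(f"nvinfer config not found: {CONFIG_PATH}")

# Then pass the absolute path to the nvinfer element:
# pgie.set_property('config-file-path', CONFIG_PATH)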

There are Python samples here: NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com)

Please refer to the samples.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.