Building Real-Time Video AI Applications ERROR LOG

pgie file config

Code executed:

# 4.2

# Import necessary libraries
import os
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GLib
from common.bus_call import bus_call
import pyds

# Set up logging configuration
import logging
logging.basicConfig(level=logging.DEBUG)  # Set desired logging level

def run(input_file_path):
    global inference_output
    inference_output = []
    Gst.init(None)

    # Create the elements that will form the pipeline
    print("Creating Pipeline")
    logging.info("Starting pipeline")
    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("filesrc", "file-source")
    source.set_property('location', input_file_path)
    # e.g. data/assessment_stream.mp4
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    streammux.set_property('width', 1280)
    streammux.set_property('height', 720)
    streammux.set_property('batch-size', 1)

    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Construct the absolute path dynamically
    base_path = '/dli/task/'  # Base directory containing the config file
    config_file_name = 'spec_files/pgie_config_dashcamnet.txt'  # Config file path relative to the base directory
    absolute_path = os.path.join(base_path, config_file_name)

    # Set the absolute path for the config file
    pgie.set_property('config-file-path', absolute_path)
    # pgie.set_property('config-file-path', os.environ['SPEC_FILE'])

    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
    caps = Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)

    encoder = Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    encoder.set_property("bitrate", 2000000)

    sink = Gst.ElementFactory.make("filesink", 'filesink')
    # alternatives: "filesink", "nveglglessink"
    sink.set_property('location', 'output.mpeg4')
    sink.set_property("sync", 1)

    # Add the elements to the pipeline
    print("Adding elements to Pipeline")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv2)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(sink)

    # Link the elements together
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    decoder.get_static_pad('src').link(streammux.get_request_pad("sink_0"))
    streammux.link(pgie)
    pgie.link(nvvidconv1)
    nvvidconv1.link(nvosd)
    nvosd.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(sink)

    # Attach probe to OSD sink pad
    osdsinkpad = nvosd.get_static_pad("sink")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # Create an event loop and feed GStreamer bus messages to it
    # (created once; the original code duplicated this inside the try block,
    # creating a second MainLoop and a second bus watch)
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Start playback and listen to events
    print("Starting pipeline")
    try:
        pipeline.set_state(Gst.State.PLAYING)
        logging.debug("Pipeline state set to PLAYING")

        logging.debug("Starting loop.run()")
        loop.run()
        logging.debug("loop.run() finished")
    except Exception as e:
        logging.error(f"Exception occurred: {e}")

    logging.info("Pipeline stopping...")
    pipeline.set_state(Gst.State.NULL)
    logging.info("Pipeline stopped")

    return inference_output

if __name__ == "__main__":
    logging.info("Starting script execution")
    run('path_to_your_input_file')  # Provide your input file path here
    logging.info("Script execution finished")

Error code received:
INFO:root:Starting script execution
INFO:root:Starting pipeline
INFO:root:Starting pipeline
DEBUG:root:Pipeline state set to PLAYING
DEBUG:root:Starting loop.run()
ERROR:root:Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(794): gst_nvinfer_start (): /GstPipeline:pipeline3/GstNvInfer:primary-inference: Config file path: /dli/task/spec_files/pgie_config_dashcamnet.txt
ERROR:root:Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(794): gst_nvinfer_start (): /GstPipeline:pipeline3/GstNvInfer:primary-inference: Config file path: /dli/task/spec_files/pgie_config_dashcamnet.txt
DEBUG:root:loop.run() finished
INFO:root:Pipeline stopping...
INFO:root:Pipeline stopped
INFO:root:Script execution finished

Creating Pipeline
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline

Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
Requirement details (This is for new requirements. Include the module name: for which plugin or which sample application, and the function description.)

I am trying to complete one of the courses listed at NVIDIA, but I seem to be running into an issue that might be related to how gstnvinfer.cpp works. I believe there might be a problem with the absolute path. I was reading another forum post where a moderator said this was a problem with the code; they were going to discuss it internally but never came back with a solution. I am working through the Building Real-Time Video AI Applications course.

This is the forum post I am referring to: https://forums.developer.nvidia.com/t/model-engine-error-deepstream-test1-python-bindings/155035/5

We support both absolute and relative paths. The key is whether your path is correct. Could you describe your operating environment in detail? Or could you attach a link to the Building Real-Time Video AI Applications course?

Here is the course link: https://courses.nvidia.com/courses/course-v1:DLI+S-IV-01+V1/

The environment was a Jupyter notebook that was already set up. I was basically downloading the DashCamNet model and then changing some configurations in the pgie file. When trying to run the pipeline, though, it seemed to have difficulties with the file path. This was the absolute path: /dli/task/spec_files/pgie_config_dashcamnet.txt.

Configuration file parsing failed (5): gstnvinfer.cpp(794): gst_nvinfer_start (): /GstPipeline:pipeline3/GstNvInfer:primary-inference: Config file path: /dli/task/spec_files/pgie_config_dashcamnet.txt

You need to double-check that the path is correct in the environment in which you are actually running. I tried that with our demo, deepstream-test1,
and modified the config file as follows:

tlt-encoded-model=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin

It works well.
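One thing worth noting about this class of error: nvinfer's "Configuration file parsing failed" can fire not only when config-file-path itself is wrong, but also when a path referenced *inside* the config file (model, labels, calibration cache) does not exist. Below is a minimal sketch that pre-checks those entries from Python; the key names mirror the nvinfer config excerpt above, and the assumption that relative paths inside the file resolve against the config file's own directory should be verified against the nvinfer documentation:

```python
import configparser
import os

def missing_config_paths(config_file,
                         path_keys=('tlt-encoded-model', 'model-engine-file',
                                    'labelfile-path', 'int8-calib-file')):
    """Return a list of (key, resolved_path) entries whose files are missing.

    Assumes the nvinfer config is INI-style and that relative paths inside
    it are resolved against the config file's own directory.
    """
    cp = configparser.ConfigParser(strict=False)
    cp.read(config_file)
    base = os.path.dirname(os.path.abspath(config_file))
    missing = []
    for section in cp.sections():
        for key in path_keys:
            if key in cp[section]:
                path = cp[section][key]
                resolved = path if os.path.isabs(path) else os.path.join(base, path)
                if not os.path.isfile(resolved):
                    missing.append((key, resolved))
    return missing
```

Calling `missing_config_paths('/dli/task/spec_files/pgie_config_dashcamnet.txt')` in a notebook cell before building the pipeline would show whether any referenced model or label file is the real culprit.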

This is the error I keep getting over and over again:

Creating Pipeline
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(794): gst_nvinfer_start (): /GstPipeline:pipeline5/GstNvInfer:primary-inference:
Config file path: spec_files/pgie_config_dashcamnet.txt

There seems to be something wrong with nvinfer, but I'm not entirely sure what it is.

# 4.2
#Import necessary libraries
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GLib
from common.bus_call import bus_call
import pyds

def run(input_file_path):
    global inference_output
    inference_output=[]
    Gst.init(None)

    # Create element that will form a pipeline
    print("Creating Pipeline")
    pipeline=Gst.Pipeline()
    
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    
    source=Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    source.set_property('location', "data/assessment_stream.h264")
    h264parser=Gst.ElementFactory.make("h264parse", "h264-parser")
    decoder=Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    
    streammux=Gst.ElementFactory.make("nvstreammux", "Stream-muxer")    
    streammux.set_property('width', 1280)
    streammux.set_property('height', 720)
    streammux.set_property('batch-size', 1)
    
    pgie=Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    # pgie.set_property('config-file-path', os.environ['SPEC_FILE'])
    pgie.set_property('config-file-path', 'spec_files/pgie_config_dashcamnet.txt')
    
    nvvidconv1=Gst.ElementFactory.make("nvvideoconvert", "convertor")
    nvosd=Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    nvvidconv2=Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    capsfilter=Gst.ElementFactory.make("capsfilter", "capsfilter")
    caps=Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)
    
    encoder=Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    encoder.set_property("bitrate", 2000000)
    
    sink=Gst.ElementFactory.make("filesink", 'filesink')
    sink.set_property('location', 'output.mpeg4')
    sink.set_property("sync", 1)
    
    # Add the elements to the pipeline
    print("Adding elements to Pipeline")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv2)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(sink)

    # Link the elements together
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    decoder.get_static_pad('src').link(streammux.get_request_pad("sink_0"))
    streammux.link(pgie)
    pgie.link(nvvidconv1)
    nvvidconv1.link(nvosd)
    nvosd.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(sink)
    
    # Attach probe to OSD sink pad
    osdsinkpad=nvosd.get_static_pad("sink")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # Create an event loop and feed GStreamer bus messages to it
    loop=GLib.MainLoop()
    bus=pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)
    
    # Start playback and listen to events
    print("Starting pipeline")
    
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    
    pipeline.set_state(Gst.State.NULL)
    return inference_output
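One silent failure mode in code like this: GStreamer's `Element.link()` returns a boolean rather than raising, so a failed link just leaves a half-connected pipeline with no error message. A small helper (my own sketch, not part of the course or DeepStream API) that links a chain and fails loudly:

```python
def link_all(*elements):
    """Link GStreamer elements pairwise, raising on the first failure.

    Gst's Element.link() returns False instead of raising, so a failed
    link is easy to miss; this makes it explicit.
    """
    for upstream, downstream in zip(elements, elements[1:]):
        if not upstream.link(downstream):
            raise RuntimeError(f"Failed to link {upstream} -> {downstream}")

# Usage, replacing the chain of bare .link() calls after the muxer:
# link_all(streammux, pgie, nvvidconv1, nvosd, nvvidconv2,
#          capsfilter, encoder, sink)
```

This would not fix the config-parsing error itself, but it rules out linking as the cause.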

It must be something in how I am creating the pipeline and linking it together. I am not sure why this is not working, as I have followed the demos and even tried new file paths.

This is the pipeline picture that I am supposed to be using:

As I mentioned earlier, it is possible to configure absolute or relative paths using our Python demo. It could be an issue with your Jupyter notebook environment. Could you move the source code and the config file to the same path and try that?

pgie.set_property('config-file-path', 'pgie_config_dashcamnet.txt')

And could you provide complete information as applicable to your setup as I attached before?
Complete Information

I’ve linked my environment below because it is already set up for me each time I launch a task.

http://dli-604a4aa51b37-dd3932.aws.labs.courses.nvidia.com/lab/lab/tree/assessment.ipynb

Notebook.zip (50.1 KB)

I’ve linked the notebook in case the environment closes.

I cannot open the link you attached. Can you confirm the real path of this configuration file on your server? You need to configure the real path on your server when you run the Python demo.

Is it possible to open the notebook? It contains the Jupyter notebook that I am using for the assignment.

Could you check the real path of this configuration file on your server?

The real path starts with /dli/task/, so I am confused about why it is not working, especially because my config file was checked by NVIDIA and was 100% correct. That means the problem resides elsewhere, unless the test setup used to check the config file is itself incorrect.
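A quick way to settle whether the path is the problem is to check it from Python in the same kernel that runs the pipeline, before handing it to nvinfer. A minimal sketch (the helper name is mine, not part of the course code):

```python
import os

def check_config(path):
    """Return (exists, resolved_path) for a config file path.

    Resolves relative paths against the current working directory,
    which is what the process effectively sees when nvinfer is given
    a relative config-file-path.
    """
    resolved = os.path.abspath(path)
    return os.path.isfile(resolved), resolved

# In a notebook cell, before building the pipeline:
# exists, resolved = check_config('/dli/task/spec_files/pgie_config_dashcamnet.txt')
# print(exists, resolved)
```

If `exists` comes back False here, the parsing error is just a missing-file error in disguise.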

From the log attached, this is a path problem. Can you do the following test: use ssh to connect to your host and find out what the actual path is? Or could you just run our demo app deepstream-test1 directly?

I’m sorry, could you give me some steps on how to ssh into the Jupyter notebook host? From the course, I click Start Lab and it generates the Jupyter notebook for me, running on a remote host; from there I have a link.
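For what it's worth, ssh may not be needed at all here: Jupyter runs shell commands from a notebook cell when each line is prefixed with `!`, so the remote filesystem can be inspected directly. A sketch (the /dli/task path is taken from the error log above):

```shell
# In a notebook cell, prefix each line with "!":
pwd                                                  # directory the kernel is running in
ls /dli/task/spec_files/ 2>/dev/null || echo "spec_files directory not found"
```

If the `ls` shows pgie_config_dashcamnet.txt at that location, the path itself is confirmed and attention can move to the config file's contents.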

There is no update from you for a period, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

I mean the real config file path on the remote host. You can use an ssh tool such as PuTTY to log in to the remote host and check the real config file path there.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.