Building Real-Time Video AI Applications assessment

Please provide complete information as applicable to your setup.

• Hardware Platform: I am following the NVIDIA course (Building Real-Time Video AI Applications), which provides a Jupyter notebook
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type: error
I have exactly the same error as the person in this post. Although it is marked as solved, I am not sure how they solved it.
Post link - Building Real-Time Video AI Applications Assessment
Please help me.

Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
Requirement details (This is for a new requirement. Include the module name: which plugin or which sample application, and the function description.)

Please attach the logs with GST_DEBUG=3 and the whole pipeline of your sample.

Please wait, I am gathering the required information. I also wanted to ask one thing:
could you please extend my lab time?

I am executing all of these commands in a Jupyter notebook on a cloud system provided by the NVIDIA course.
GPU: Tesla T4
DeepStream Version: 6.0.0
TensorRT Version: 8.0.3
NVIDIA GPU Driver Version: 470.82.01

Issue
I am working on my final coding assessment, which requires me to fix the config file and the rest of the files. I think I did, but when I executed it, it gave me this error:
ERROR
Creating Pipeline
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(794): gst_nvinfer_start (): /GstPipeline:pipeline5/GstNvInfer:primary-inference:
Config file path: /dli/task/spec_files/pgie_config_dashcamnet.txt
I am not sure how one can reproduce this (I will provide all the code files).
GST_DEBUG=3 output
I hope you meant this. I ran it like this:
Gst_Debug = 3 python assessment.ipynb
Output:
Traceback (most recent call last):
  File "assessment.ipynb", line 187, in
    "scrolled": true,
NameError: name 'true' is not defined
Files

Config file:
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
tlt-model-key=tlt_encode
tlt-encoded-model=/dli/task/ngc_assets/dashcamnet_vpruned_v1.0/resnet18_dashcamnet_pruned.etlt
labelfile-path=/dli/task/ngc_assets/dashcamnet_vpruned_v1.0/labels.txt
int8-calib-file=/dli/task/ngc_assets/dashcamnet_vpruned_v1.0/dashcamnet_int8.txt
input-dims=3;720;1280;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
cluster-mode=0
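Worth noting when editing the config: DeepStream's parser reads `net-scale-factor` as a plain decimal float and does not evaluate arithmetic expressions such as `1/255.0`, which can cause a "Configuration file parsing failed" error like the one above. A quick way to precompute the literal value (this matches the `0.00392156862745098` used in NVIDIA's published DashcamNet configs):

```python
# DeepStream config values must be literal numbers; expressions like
# 1/255.0 are not evaluated by the parser. Precompute the decimal:
scale = 1 / 255.0
print(f"net-scale-factor={scale!r}")  # net-scale-factor=0.00392156862745098
```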

The notebook code:

# 4.2
#Import necessary libraries
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GLib
from common.bus_call import bus_call
import pyds

# Frame dimensions used by the probe criteria (match the streammux settings)
FRAME_WIDTH = 1280
FRAME_HEIGHT = 720

def run(input_file_path):
    global inference_output
    inference_output=[]
    Gst.init(None)

    # Create element that will form a pipeline
    print("Creating Pipeline")
    pipeline=Gst.Pipeline()
    
    source = Gst.ElementFactory.make("filesrc", "file-source")
    source.set_property('location', input_file_path)
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    streammux.set_property('width', 1280)
    streammux.set_property('height', 720)
    streammux.set_property('batch-size', 1)
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    pgie.set_property('config-file-path', 'pgie_config_dashcamnet.txt')
    nvvidconv1=Gst.ElementFactory.make("nvvideoconvert", "convertor")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    capsfilter = Gst.ElementFactory.make("capsfilter", "caps-filter")
    caps = Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)
    
    encoder=Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    encoder.set_property("bitrate", 2000000)
    
    sink = Gst.ElementFactory.make('filesink', 'file-sink')
    sink.set_property('location', 'output.mpeg4')
    sink.set_property("sync", 1)
    
    # Add the elements to the pipeline
    print("Adding elements to Pipeline")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv2)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(sink)

    # Link the elements together
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    decoder.get_static_pad('src').link(streammux.get_request_pad("sink_0"))
    streammux.link(pgie)
    pgie.link(nvvidconv1)
    nvvidconv1.link(nvosd)
    nvosd.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(sink)
    
    # Attach probe to OSD sink pad
    osdsinkpad = nvosd.get_static_pad("sink")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)


    # Create an event loop and feed gstreamer bus messages to it
    loop=GLib.MainLoop()
    bus=pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)
    
    # Start play back and listen to events
    print("Starting pipeline")
    
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    
    pipeline.set_state(Gst.State.NULL)
    return inference_output
# 4.3
# Define the Probe Function
def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer=info.get_buffer()

    # Retrieve batch metadata from the gst_buffer
    batch_meta=pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame=batch_meta.frame_meta_list
    while l_frame is not None:
        
        # Initially set the tailgate indicator to False for each frame
        tailgate=False
        try:
            frame_meta=pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        frame_number=frame_meta.frame_num
        l_obj=frame_meta.obj_meta_list
        
        # Iterate through each object to check its dimension
        while l_obj is not None:
            try:
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                
                # If the object meets the criteria, set the tailgate indicator to True
                obj_bottom=obj_meta.rect_params.top+obj_meta.rect_params.height
                if (obj_meta.rect_params.width > FRAME_WIDTH * 0.3) and (obj_bottom > FRAME_HEIGHT * 0.9): 
                    tailgate=True
                    
            except StopIteration:
                break
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break
        
        print(f'Analyzing frame {frame_number}', end='\r')
        inference_output.append(str(int(tailgate)))
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
# 4.4
tailgate_log=run(input_file_path='/dli/task/data/assessment_stream.h264')

# DO NOT CHANGE BELOW
with open('/dli/task/my_assessment/answer_4.txt', 'w') as f: 
    f.write('\n'.join(tailgate_log))
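
As a side note (not part of the assessment code itself): the notebook only checks whether `pgie` was created, but every `Gst.ElementFactory.make` call can return `None`, and silent failures there surface later as confusing pipeline errors. A small hypothetical helper like this sketch makes creation failures fail fast:

```python
def require(element, name):
    # Hypothetical helper: Gst.ElementFactory.make returns None on failure,
    # so raise immediately with the element name instead of failing later.
    if element is None:
        raise RuntimeError(f"Unable to create element: {name}")
    return element

# Example usage inside run():
#   pgie = require(Gst.ElementFactory.make("nvinfer", "primary-inference"), "pgie")
```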

DeepStream 6.0.0 is too old. We recommend that you install our latest version, 7.1: dgpu-setup-for-ubuntu

From the log you attached, there might be something wrong with your file path. You can add GST_DEBUG=3 in front of your run command to enable more verbose logging, then attach the whole log.

Could you please explain how to use GST_DEBUG? I tried it, and I am not sure I used it correctly.
This is how I used it: I made a new cell, and in it I wrote
!Gst-debug=3 python<my_file>.ipynb
And in the output I got:
Traceback (most recent call last):
  File "assessment.ipynb", line 187, in
    "scrolled": true,
NameError: name 'true' is not defined
As I don't have much time, could you be a little quick?
Thanks @yuweiw

You can run your command like below. Then attach the whole log.

GST_DEBUG=3 <your_python_command>

We still recommend that you install our latest version DeepStream 7.1 and learn some basic concepts about DeepStream through our Guide.

!GST_DEBUG=3 python3 assessment-Copy1.ipynb
Traceback (most recent call last):
  File "assessment-Copy1.ipynb", line 103, in
    "scrolled": true,
NameError: name 'true' is not defined

You should add GST_DEBUG=3 in front of your DeepStream running command, not in front of the Jupyter Notebook .ipynb file. An .ipynb file is JSON, not Python, which is why running it with python fails with a NameError on the JSON literal true.

GST_DEBUG=3 <your_DeepStream_program_command>
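
A practical way to apply this advice (assuming `jupyter nbconvert` is available in the course environment, which the thread does not confirm) is to export the notebook to a plain Python script first, then prefix the run command with the environment variable:

```shell
# An .ipynb file is JSON, not Python, so export the notebook to a script first:
#   jupyter nbconvert --to script assessment.ipynb   # writes assessment.py
# then run it with verbose GStreamer logging; GStreamer writes its debug
# output to stderr, so capture that for attaching to the forum:
#   GST_DEBUG=3 python3 assessment.py 2> gst_debug.log

# The env-var prefix mechanism itself, demonstrated with a plain subshell:
GST_DEBUG=3 sh -c 'echo "GST_DEBUG is $GST_DEBUG"'
```

The `VAR=value command` form sets the variable only for that one command, so it does not affect the rest of the shell session.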

thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.