Building Real-Time Video AI Applications Assessment --> Kernel Crash!

On Step 3: Edit the Inference Configuration File

[property]
gpu-id=0
net-scale-factor=1
tlt-model-key=tlt_encode
tlt-encoded-model=/dli/task/ngc_assets/dashcamnet_vpruned_v1.0
labelfile-path=/dli/task/dashcam_labels.txt
int8-calib-file=/dli/task/empty_calib_file.txt
input-dims=3;720;1280;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
cluster-mode=0
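One thing worth verifying before running the pipeline is that every path in this config actually exists; when nvinfer cannot load the model it can abort the whole process at the native level, which Jupyter then reports as a dead kernel rather than a Python traceback. A minimal sketch for checking the paths, assuming the config file location is in the SPEC_FILE environment variable as in the code below (note in particular that tlt-encoded-model normally points to a .etlt file, not a directory):

# Sketch: sanity-check the file paths referenced by the nvinfer config.
# Assumes the config file path is available in the SPEC_FILE env var.
import os

path_keys = ('tlt-encoded-model', 'labelfile-path', 'int8-calib-file')

with open(os.environ['SPEC_FILE']) as f:
    for line in f:
        key, _, value = line.strip().partition('=')
        if key in path_keys:
            status = 'OK' if os.path.isfile(value) else 'MISSING or a directory'
            print(f'{key}: {value} -> {status}')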

On Step 4: Build and Run DeepStream Pipeline

4.2

# Import necessary libraries
import os
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GLib
from common.bus_call import bus_call
import pyds

def run(input_file_path):
    global inference_output
    inference_output = []
    Gst.init(None)

    # Create the elements that will form the pipeline
    print("Creating Pipeline")
    pipeline = Gst.Pipeline()

    source = Gst.ElementFactory.make("filesrc", "file-source")
    source.set_property('location', input_file_path)
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    streammux.set_property('width', 1280)  # Set appropriate width
    streammux.set_property('height', 720)  # Set appropriate height
    streammux.set_property('batch-size', 1)  # Set appropriate batch size

    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    pgie.set_property('config-file-path', os.environ['SPEC_FILE'])

    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
    capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
    caps = Gst.Caps.from_string("video/x-raw, format=I420")
    capsfilter.set_property("caps", caps)

    encoder = Gst.ElementFactory.make("avenc_mpeg4", "encoder")
    encoder.set_property("bitrate", 2000000)

    sink = Gst.ElementFactory.make("filesink", 'filesink')
    sink.set_property('location', 'output.mpeg4')
    sink.set_property("sync", 1)

    # Add the elements to the pipeline
    print("Adding elements to Pipeline")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv2)
    pipeline.add(capsfilter)
    pipeline.add(encoder)
    pipeline.add(sink)

    # Link the elements together
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    decoder.get_static_pad('src').link(streammux.get_request_pad("sink_0"))
    streammux.link(pgie)  # Link streammux to primary inference
    pgie.link(nvvidconv1)  # Link primary inference to converter
    nvvidconv1.link(nvosd)  # Link converter to on-screen display
    nvosd.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(sink)

    # Attach probe to OSD sink pad
    osdsinkpad = nvosd.get_static_pad("sink")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Start playback and listen to events
    print("Starting pipeline")

    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    pipeline.set_state(Gst.State.NULL)
    return inference_output
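Note that the code above never checks whether Gst.ElementFactory.make() succeeded; it returns None when a plugin cannot be found, and the following set_property call then fails inside native code rather than with a readable Python error. A small hypothetical helper that fails loudly instead:

# Sketch: a hypothetical helper that raises a readable Python error
# instead of letting a None element crash the pipeline later.
def make_element(factory_name, element_name):
    element = Gst.ElementFactory.make(factory_name, element_name)
    if element is None:
        raise RuntimeError(f'Could not create element {element_name} '
                           f'(factory {factory_name!r} not found)')
    return element

# Usage: replace the direct calls above, e.g.
# decoder = make_element("nvv4l2decoder", "nvv4l2-decoder")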

4.3

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()

    # Retrieve batch metadata from the gst_buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:

        # Initially set the tailgate indicator to False for each frame
        tailgate = False
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list

        # Iterate through each object to check its dimensions
        # (FRAME_WIDTH and FRAME_HEIGHT are assumed to be defined earlier in the notebook)
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)

                # If the object meets the criteria, set the tailgate indicator to True
                obj_bottom = obj_meta.rect_params.top + obj_meta.rect_params.height
                if (obj_meta.rect_params.left > FRAME_WIDTH * 0.3) and (obj_bottom > FRAME_HEIGHT * 0.9):
                    tailgate = True

            except StopIteration:
                break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        print(f'Analyzing frame {frame_number}', end='\r')
        inference_output.append(str(int(tailgate)))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
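The tailgating criterion itself is pure arithmetic and can be checked outside the pipeline. A small sketch with made-up box values, assuming FRAME_WIDTH and FRAME_HEIGHT match the 1280x720 stream-muxer settings above:

# Sketch: the tailgate test in isolation, with hypothetical box values.
FRAME_WIDTH, FRAME_HEIGHT = 1280, 720  # matches the streammux settings above

def is_tailgating(left, top, height):
    bottom = top + height
    return left > FRAME_WIDTH * 0.3 and bottom > FRAME_HEIGHT * 0.9

print(is_tailgating(left=500, top=400, height=300))  # True: right of the 30% line, near the bottom
print(is_tailgating(left=100, top=400, height=300))  # False: too far left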

4.4

tailgate_log = run(input_file_path='/dli/task/data/assessment_stream.h264')

with open('/dli/task/my_assessment/answer_4.txt', 'w') as f:
    f.write('\n'.join(tailgate_log))

The output of the last cell (4.4) is:

Creating Pipeline
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline

The Problem:

So the issue here is that about a second after "Starting pipeline", the popup below appears. I tried restarting the kernel, stopping the entire assessment and re-executing the notebook, running the lab the next day, etc. Yet nothing worked.

Any ideas…?

[Screenshot: kernel crash popup]
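For what it's worth, a kernel-crash popup with no Python traceback usually means a native library aborted the process, so the real error never reaches the notebook. One way to surface it (a sketch, assuming the environment passes GST_DEBUG through to the kernel) is to raise GStreamer's log level before Gst.init() is called:

# Sketch: enable GStreamer's own error/warning logging before Gst.init()
# so the underlying failure is printed before the process dies.
import os
os.environ['GST_DEBUG'] = '3'           # show errors, warnings and fixme messages
os.environ['GST_DEBUG_NO_COLOR'] = '1'  # keep notebook/console output readable

Running the same code as a plain script from a terminal, instead of inside the notebook, also lets you see the abort message that Jupyter otherwise swallows.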

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (for bugs: include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce)

• Requirement details (for new requirements: include the module name, i.e. which plugin or sample application, and the function description)

It is the assessment for one of NVIDIA's courses (Building Real-Time Video AI Applications in this case), which means I'm running the notebook in NVIDIA's own environment. In other words, I'm not really sure how to answer those questions. After all, the code I provided above is NVIDIA's; I only made a few changes in the required zones, as the instructions indicated.

Why I'm having this issue is what I'm trying to figure out by reaching out to experts like you for insights.

PS: I found nothing on the net about this problem.

Can you please provide the hardware and software information mentioned in my first comment? Could you simplify the code to narrow down the issue? Can the simplest sample (test1) run well? A sketch of such a simplification follows below.
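To illustrate what "simplify the code" could look like: drop the OSD/encode/file-writing tail of the pipeline and terminate it with a fakesink immediately after inference. A sketch along those lines, reusing the element names from the code in 4.2:

# Sketch: minimal pipeline to isolate the failing stage.
# If this already crashes, the problem is in decode/streammux/nvinfer;
# if it runs, add the OSD and encoder elements back one at a time.
fakesink = Gst.ElementFactory.make("fakesink", "fakesink")

for element in (source, h264parser, decoder, streammux, pgie, fakesink):
    pipeline.add(element)

source.link(h264parser)
h264parser.link(decoder)
decoder.get_static_pad('src').link(streammux.get_request_pad("sink_0"))
streammux.link(pgie)
pgie.link(fakesink)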

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.