Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5.1
• CUDA Version: 11.4
I used the guide Deploy YOLOv8 with TensorRT and DeepStream SDK | Seeed Studio Wiki to create the files
deepstream_app_config.txt and config_infer_primary.txt.
File deepstream_app_config.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
type=2
uri=file:///home/kodifly/DeepStream-Yolo/4_classes_yolov9m/DeepStream-Yolo/IMG_7640.MOV
num-sources=1
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
[tests]
file-loop=0
File config_infer_primary.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov8m.cfg
model-file=yolov8m.wts
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
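As a side note, config_infer_primary.txt is plain INI, so the thresholds can be sanity-checked with Python's configparser before launching the pipeline. This is only a quick check I use; DeepStream itself parses the file with its own loader, and the excerpt below is just the relevant keys from the file above.

```python
import configparser

# Excerpt of config_infer_primary.txt (keys copied from the file above)
config_text = """
[property]
num-detected-classes=4
network-mode=0

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
"""

parser = configparser.ConfigParser()
parser.read_string(config_text)

num_classes = parser.getint('property', 'num-detected-classes')
nms_iou = parser.getfloat('class-attrs-all', 'nms-iou-threshold')
conf_thresh = parser.getfloat('class-attrs-all', 'pre-cluster-threshold')

print(num_classes, nms_iou, conf_thresh)  # 4 0.45 0.25
```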
After that, I am using a Python file to run them.
File run_deepstream_app.py
import subprocess
import re

deepstream_config_path = '/home/kodifly/DeepStream-Yolo/4_classes_yolov9m/DeepStream-Yolo/deepstream_app_config.txt'

def run_deepstream():
    command = f'deepstream-app -c {deepstream_config_path}'
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, text=True)
    return process

def process_output(process):
    try:
        # Regular expression to capture class_id, class name, and bbox coordinates
        detection_pattern = re.compile(r'Detected class ID: (\d+), name: (\w+), bbox: \[(.*?)\]')
        for line in process.stdout:
            if 'Detected class ID:' in line:
                match = detection_pattern.search(line)
                if match:
                    class_id, class_name, bbox = match.groups()
                    print(f'Class ID: {class_id}, Name: {class_name}, Bounding Box: {bbox}')
            # Optionally, handle other log messages or errors
            if 'Error' in line:
                print(f'Error: {line.strip()}')
    except Exception as e:
        print(f'Error occurred while processing output: {e}')

if __name__ == '__main__':
    # Run DeepStream pipeline
    process = run_deepstream()
    # Process the output from DeepStream
    process_output(process)
    # Optionally, wait for DeepStream to complete (if it runs until completion)
    process.wait()
It opens the video window and performs detection on it. I am trying to access the class_id, class_name, and bounding-box coordinates for further post-processing, but I have not been able to extract them. Please suggest a solution or a reference.
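For reference, I verified the regex in isolation against a hand-written log line. Note this sample line is hypothetical: as far as I can tell, the stock deepstream-app does not print "Detected class ID: ..." lines at all unless its source (or the custom bbox parser) is patched to log detections, which may be why my regex never matches anything.

```python
import re

# Hypothetical log line; the stock deepstream-app does NOT emit this format
# unless its source is modified to print per-object detections.
sample_line = 'Detected class ID: 2, name: car, bbox: [100.5, 200.0, 350.2, 420.9]'

# Brackets must be escaped with \[ and \]; unescaped, [...] is parsed
# as a regex character class and the pattern never matches.
detection_pattern = re.compile(r'Detected class ID: (\d+), name: (\w+), bbox: \[(.*?)\]')

match = detection_pattern.search(sample_line)
if match:
    class_id, class_name, bbox = match.groups()
    print(class_id, class_name, bbox)  # 2 car 100.5, 200.0, 350.2, 420.9
```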