Overview
- I have built a DeepStream pipeline (in Python) that begins with two appsrc elements and uses nvstreammux, nvinfer for batch processing, and a tiled display.
- When I run the script, it prints Successfully handled EOS for both appsrcs, although frames are being pushed continuously.
- No inference is performed.
- Any help will be appreciated!
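To illustrate what I mean by "no inference": if nvinfer were attaching detection metadata, a buffer probe on its src pad along the lines of the sketch below (standard pyds metadata API; the function name is illustrative, and the probe is not part of main.py further down) would report the per-frame object counts.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def infer_src_probe(pad, info, u_data):
    # Inspect the batch metadata that nvinfer attaches to each buffer.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print(f"source {frame_meta.pad_index}: {frame_meta.num_obj_meta} objects")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached after the pipeline is built, e.g.:
# nvinfer.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, infer_src_probe, 0)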
Environment
• Hardware Platform (Jetson / GPU)
x86 + RTX 3090 Ti (container: nvcr.io/nvidia/deepstream:7.0-gc-triton-devel)
• DeepStream Version
7.0
• NVIDIA GPU Driver Version (valid for GPU only)
550.78
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
$ docker run \
-it \
--privileged \
--name ds7_3 \
--net=host \
--gpus all \
-e DISPLAY=$DISPLAY \
--shm-size=16G \
-v /hdd/data:/Dataset \
-v /dev/video*:/dev/video* \
nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
# Compile YOLO
$ cd /opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo
$ export CUDA_VER=12.2
$ make -C nvdsinfer_custom_impl_Yolo
# Install DeepStream-Python
$ cd /opt/nvidia/deepstream/deepstream-7.0
$ ./user_deepstream_python_apps_install.sh -v 1.1.11
# Create a virtual env and install pyds.whl
$ cd sources/deepstream_python_apps
$ apt-get install python3.10-venv
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install pyds-1.1.11-py3-none-linux_x86_64.whl
$ pip install cuda-python
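A quick sanity check along these lines (illustrative, not part of the reproduction steps) confirms that pyds and the DeepStream plugins are visible from inside the venv before launching main.py:
# check_env.py (illustrative): verify the bindings and plugins load inside the venv.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

Gst.init(None)
print("GStreamer:", Gst.version_string())
print("pyds loaded from:", pyds.__file__)
print("nvstreammux found:", Gst.ElementFactory.find("nvstreammux") is not None)
print("nvinfer found:", Gst.ElementFactory.find("nvinfer") is not None)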
main.py
import cv2
import gi
import numpy as np
import sys
import time
from termcolor import cprint
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib
sys.path.append("/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps")
from common.bus_call import bus_call
def push_frame_to_gstreamer(appsrc, frame):
    if appsrc is None:
        print("appsrc is None")
        return
    data = frame.tobytes()
    buffer = Gst.Buffer.new_wrapped(data)
    buffer.pts = Gst.util_uint64_scale(time.time(), Gst.SECOND, 1)
    buffer.duration = Gst.util_uint64_scale(1, Gst.SECOND, 30)
    retval = appsrc.emit("push-buffer", buffer)
    if retval != Gst.FlowReturn.OK:
        cprint("Error pushing buffer to appsrc", "red")
# Link the elements
def link_elements(element1, element2):
    if element1.link(element2):
        cprint(f"Linked {element1.name} -> {element2.name}", "green")
    else:
        cprint(f"Failed to link {element1.name} -> {element2.name}", "red")
# Initialize GStreamer
Gst.init(None)
# Create individual elements
pipeline = Gst.Pipeline()
src1 = Gst.ElementFactory.make("appsrc", "src1")
src2 = Gst.ElementFactory.make("appsrc", "src2")
videoconvert1 = Gst.ElementFactory.make("videoconvert", "videoconvert1")
videoconvert2 = Gst.ElementFactory.make("videoconvert", "videoconvert2")
capsfilter1 = Gst.ElementFactory.make("capsfilter", "capsfilter1")
capsfilter2 = Gst.ElementFactory.make("capsfilter", "capsfilter2")
queue1 = Gst.ElementFactory.make("queue", "queue1")
queue2 = Gst.ElementFactory.make("queue", "queue2")
queue3 = Gst.ElementFactory.make("queue", "queue3")
queue4 = Gst.ElementFactory.make("queue", "queue4")
nvstreammux = Gst.ElementFactory.make("nvstreammux", "nvstreammux")
nvinfer = Gst.ElementFactory.make("nvinfer", "nvinfer")
nvmultistreamtiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvmultistreamtiler")
nvvideoconvert = Gst.ElementFactory.make("nvvideoconvert", "nvvideoconvert")
nvdsosd = Gst.ElementFactory.make("nvdsosd", "nvdsosd")
nveglglessink = Gst.ElementFactory.make("nveglglessink", "nveglglessink")
if not all(
    [
        pipeline,
        src1,
        src2,
        videoconvert1,
        videoconvert2,
        capsfilter1,
        capsfilter2,
        queue1,
        queue2,
        queue3,
        queue4,
        nvstreammux,
        nvinfer,
        nvmultistreamtiler,
        nvvideoconvert,
        nvdsosd,
        nveglglessink,
    ]
):
    cprint("Failed to create elements", "red")
    exit(1)
# Set properties for elements
caps = Gst.Caps.from_string(
    "video/x-raw,format=NV12,width=800,height=600,framerate=30/1"
)
capsfilter1.set_property("caps", caps)
capsfilter2.set_property("caps", caps)
src1.set_property(
    "caps",
    Gst.Caps.from_string("video/x-raw,format=BGR,width=800,height=600,framerate=30/1"),
)
src2.set_property(
    "caps",
    Gst.Caps.from_string("video/x-raw,format=BGR,width=800,height=600,framerate=30/1"),
)
src1.set_property("format", Gst.Format.TIME)
src2.set_property("format", Gst.Format.TIME)
src1.set_property("is-live", True)
src2.set_property("is-live", True)
nvstreammux.set_property("width", 800)
nvstreammux.set_property("height", 600)
nvstreammux.set_property("batch-size", 2)
nvstreammux.set_property("batched-push-timeout", 4000000)
nvinfer.set_property("config-file-path", "config_infer_primary_yoloV3.txt")
nvmultistreamtiler.set_property("rows", 1)
nvmultistreamtiler.set_property("columns", 2)
nvmultistreamtiler.set_property("width", 1600)
nvmultistreamtiler.set_property("height", 600)
# Add elements to the pipeline
pipeline.add(src1)
pipeline.add(videoconvert1)
pipeline.add(capsfilter1)
pipeline.add(queue1)
pipeline.add(src2)
pipeline.add(videoconvert2)
pipeline.add(capsfilter2)
pipeline.add(queue2)
pipeline.add(nvstreammux)
pipeline.add(queue3)
pipeline.add(nvinfer)
pipeline.add(nvmultistreamtiler)
pipeline.add(queue4)
pipeline.add(nvvideoconvert)
pipeline.add(nvdsosd)
pipeline.add(nveglglessink)
link_elements(src1, videoconvert1)
link_elements(videoconvert1, capsfilter1)
link_elements(capsfilter1, queue1)
link_elements(src2, videoconvert2)
link_elements(videoconvert2, capsfilter2)
link_elements(capsfilter2, queue2)
# Request and link nvstreammux pads
sinkpad1 = nvstreammux.get_request_pad('sink_0')
srcpad1 = queue1.get_static_pad('src')
if srcpad1.link(sinkpad1) == Gst.PadLinkReturn.OK:
    cprint("Linked queue1 -> nvstreammux.sink_0", "green")
else:
    cprint("Failed to link queue1 -> nvstreammux.sink_0", "red")
sinkpad2 = nvstreammux.get_request_pad('sink_1')
srcpad2 = queue2.get_static_pad('src')
if srcpad2.link(sinkpad2) == Gst.PadLinkReturn.OK:
    cprint("Linked queue2 -> nvstreammux.sink_1", "green")
else:
    cprint("Failed to link queue2 -> nvstreammux.sink_1", "red")
link_elements(nvstreammux, queue3)
link_elements(queue3, nvinfer)
link_elements(nvinfer, nvmultistreamtiler)
link_elements(nvmultistreamtiler, queue4)
link_elements(queue4, nvvideoconvert)
link_elements(nvvideoconvert, nvdsosd)
link_elements(nvdsosd, nveglglessink)
# Create an event loop and feed GStreamer bus messages to it
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
# Start the pipeline
pipeline.set_state(Gst.State.PLAYING)
import glob
files = glob.glob("./images/*")
l = len(files)
try:
    while True:
        idx = np.random.randint(0, l)
        frame0 = cv2.imread(files[idx])
        idx = np.random.randint(0, l)
        frame1 = cv2.imread(files[idx])
        # Resize to 800x600
        frame0 = cv2.resize(frame0, (800, 600))
        frame1 = cv2.resize(frame1, (800, 600))
        if frame0 is not None and frame1 is not None:
            cprint("Pushed", "green")
            push_frame_to_gstreamer(src1, frame0)
            push_frame_to_gstreamer(src2, frame1)
except KeyboardInterrupt:
    cprint("Stopping frame grabbing due to user interrupt.", "yellow")
    pipeline.set_state(Gst.State.NULL)
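For reference, the appsrc caps above declare BGR at 800x600, so every buffer pushed by push_frame_to_gstreamer is expected to hold exactly 800 * 600 * 3 = 1,440,000 bytes. A small check such as this sketch (hypothetical helper, not part of main.py) makes that assumption explicit:
import numpy as np

EXPECTED_SHAPE = (600, 800, 3)    # rows (height), cols (width), BGR channels
EXPECTED_NBYTES = 800 * 600 * 3   # bytes per pushed buffer

def frame_matches_caps(frame):
    # True only if the frame matches the caps set on src1/src2.
    return (
        isinstance(frame, np.ndarray)
        and frame.dtype == np.uint8
        and frame.shape == EXPECTED_SHAPE
        and frame.nbytes == EXPECTED_NBYTES
    )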
config_infer_primary_yoloV3.txt
####################################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
####################################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8), model-file-format
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=/opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo/yolov3.cfg
model-file=/opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo/yolov3.weights
#model-engine-file=model_b2_gpu0_int8.engine
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo/yolov3-calibration.table.trt7.0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=80
gie-unique-id=1
network-type=0
is-classifier=0
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
#parse-bbox-func-name=NvDsInferParseCustomYoloV3_cuda
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0
disable-output-host-copy=0
[class-attrs-all]
nms-iou-threshold=0.3
threshold=0.7
OUTPUT
Unknown or legacy key specified 'is-classifier' for group [property]
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Linked src1 -> videoconvert1
Linked videoconvert1 -> capsfilter1
Linked capsfilter1 -> queue1
Linked src2 -> videoconvert2
Linked videoconvert2 -> capsfilter2
Linked capsfilter2 -> queue2
main.py:138: DeprecationWarning: Gst.Element.get_request_pad is deprecated
sinkpad1 = nvstreammux.get_request_pad('sink_0')
Linked queue1 -> nvstreammux.sink_0
Linked queue2 -> nvstreammux.sink_1
Linked nvstreammux -> queue3
Linked queue3 -> nvinfer
Linked nvinfer -> nvmultistreamtiler
Linked nvmultistreamtiler -> queue4
Linked queue4 -> nvvideoconvert
Linked nvvideoconvert -> nvdsosd
Linked nvdsosd -> nveglglessink
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
0:00:03.531772182 3031015 0x559257e14b00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from : model_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT data 3x608x608
1 OUTPUT kFLOAT yolo_83 255x19x19
2 OUTPUT kFLOAT yolo_95 255x38x38
3 OUTPUT kFLOAT yolo_107 255x76x76
0:00:03.621322511 3031015 0x559257e14b00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: model_b1_gpu0_int8.engine
0:00:03.625012156 3031015 0x559257e14b00 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<nvinfer> [UID 1]: Load new model:config_infer_primary_yoloV3.txt sucessfully
Pushed
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
Pushed
Pushed
Pushed
Pushed
...