Regarding an issue with dsexample: adding OpenCV processing to the pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Jetson Orin
• DeepStream Version - 6.4
• JetPack Version (valid for Jetson only) - 6.0-b52
• TensorRT Version - 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only) - 12.4
• Issue Type (questions, new requirements, bugs) - when we run custom OpenCV code in the DeepStream pipeline, dsexample cannot be connected to process_buffer
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

pipeline_str = """
filesrc location="WhatsApp Video 2023-10-15 at 1.38.28 PM.mp4" ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_yoloV8.txt ! nvvideoconvert ! dsexample ! nvdsosd ! nv3dsink
"""

Initialize GStreamer

Gst.init(None)

Create the pipeline

pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)

Define the function to handle the dsexample plugin’s buffer

def process_buffer(pad, info):
    # Extract the GstBuffer from the probe info
    buffer = info.get_buffer()
    # Map the buffer for reading
    success, map_info = buffer.map(Gst.MapFlags.READ)
    if not success:
        print("Failed to map buffer")
        return
    # Extract data from the buffer and process it with OpenCV
    data = map_info.data
    frame = cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is not None:
        # Apply the OpenCV processing
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower_magenta = np.array([140, 50, 50])
        upper_magenta = np.array([170, 255, 255])
        mask = cv2.inRange(hsv, lower_magenta, upper_magenta)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        max_contour = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(max_contour)
        cropped = frame[y:y+h, x:x+w]
        # Display the processed frame
        cv2.imshow("Processed Frame", cropped)
        cv2.waitKey(1)
    else:
        print("Failed to decode frame from buffer")
    # Unmap the buffer
    buffer.unmap(map_info)

Get the dsexample element from the pipeline

dsexample = pipeline.get_by_name("dsexample")

Connect the pad-added signal to the process_buffer function

dsexample.connect("pad-added", process_buffer)

Start the main loop

loop = GObject.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)

jetson@ubuntu:/media/jetson/CCCOMA_X64FRE_EN-GB_DV9/pandian/people_count/people_count$ python3 deepstream_multi_cam_1.py
Opening in BLOCKING MODE
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.910153198 13956 0xaaaafc4718f0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/media/jetson/CCCOMA_X64FRE_EN-GB_DV9/pandian/people_count/people_count/model_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

0:00:07.307440547 13956 0xaaaafc4718f0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /media/jetson/CCCOMA_X64FRE_EN-GB_DV9/pandian/people_count/people_count/model_b1_gpu0_fp32.engine
0:00:07.319468330 13956 0xaaaafc4718f0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Traceback (most recent call last):
  File "/media/jetson/CCCOMA_X64FRE_EN-GB_DV9/pandian/people_count/people_count/deepstream_multi_cam_1.py", line 535, in <module>
    dsexample.connect("pad-added", process_buffer)
AttributeError: 'NoneType' object has no attribute 'connect'

dsexample is open source; it has no "connect" attribute and no "pad-added" signal.

is it possible to add OpenCV code into the DeepStream pipeline so that it displays the OpenCV operation output?

is there any other way to integrate OpenCV code into the DeepStream pipeline instead of using dsexample?

if you don't want to use dsexample, you can create a new plugin. dsexample is a sample plugin that shows how to use OpenCV.

can you give any reference for implementing this in DeepStream Python?

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstBase', '1.0')
from gi.repository import GObject, Gst, GstBase

class CustomPlugin(GstBase.BaseTransform):
    __gstmetadata__ = ('CustomPlugin',
                       'Filter/Effect',
                       'Custom plugin for DeepStream',
                       'Your Name <your.email@example.com>')

    __gsttemplates__ = (Gst.PadTemplate.new("src",
                                            Gst.PadDirection.SRC,
                                            Gst.PadPresence.ALWAYS,
                                            Gst.Caps.new_any()),
                        Gst.PadTemplate.new("sink",
                                            Gst.PadDirection.SINK,
                                            Gst.PadPresence.ALWAYS,
                                            Gst.Caps.new_any()))

    def __init__(self):
        super().__init__()

    def do_transform(self, inbuf, outbuf):
        # Implement image processing logic using OpenCV here
        return Gst.FlowReturn.OK

GObject.type_register(CustomPlugin)

def plugin_init(plugin):
    return Gst.Element.register(plugin, 'customplugin', Gst.Rank.NONE, CustomPlugin)

Gst.init(None)
plugin_init(None)

should we implement it like this?

please refer to this link for how to write GStreamer elements in Python.

can it be integrated anywhere in the DeepStream Python pipeline?

and one more doubt: after extracting a batch of streams from RTSP, is it possible to send the RTSP stream to a custom Python plugin for OpenCV processing?

can you explain this to me, and can you give any reference for it?

it would be very helpful for my implementation.

please refer to this code: test3 can support an RTSP source. after decoding, you can process the raw data in your own plugin for OpenCV processing.
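For orientation, an RTSP source typically enters such a pipeline like this (a pipeline sketch, not a runnable command: the URL, the depayloader for your codec, and the custom element name are placeholders; test3 builds the equivalent programmatically):

```shell
gst-launch-1.0 rtspsrc location=rtsp://<camera-url> ! rtph264depay ! h264parse ! \
    nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! <your-plugin> ! nvdsosd ! nv3dsink
```

The nvvideoconvert-to-RGBA step matters if your plugin (or a probe) wants to read the frame as a NumPy array.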

after creating the custom plugin for OpenCV processing, will the output display on the tiler?

could you elaborate on the question? Thanks!

We have to implement OpenCV processing in the DeepStream Python pipeline: after extracting a batch of streams from RTSP, it should run the OpenCV operation, and then the result has to be displayed on the DeepStream tiler plugin.

please refer to test3; it supports tiler and osd.

ok thank you so much

i have one more doubt as well.

i want to stitch two cameras without overlapping the two camera regions. is there any good methodology available to do this, in DeepStream or any algorithm?

nvmultistreamtiler can stitch multiple videos into one video. please refer to the doc, and please refer to this sample:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
    filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_1 \
    nvstreammux name=mux batch-size=2 width=1920 height=1080 nvbuf-memory-type=3 ! nvmultistreamtiler ! nveglglessink
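Note that what nvmultistreamtiler does for a batch is grid compositing, not panorama stitching: each stream gets its own tile and no pixels are blended. For a two-stream 1x2 layout it conceptually amounts to this (pure NumPy, for illustration only; the function name is my own):

```python
import numpy as np

def tile_side_by_side(frame_a, frame_b):
    # Place two equally sized frames next to each other on one canvas,
    # the way a 1x2 tiler layout would; no blending or overlap handling.
    assert frame_a.shape == frame_b.shape
    return np.hstack([frame_a, frame_b])
```

If the camera fields of view overlap, the tiler will simply show the overlap twice; removing it requires actual registration/stitching, which is a different problem.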

ok, thank you


when you look at the two camera pictures above, the two camera regions are overlapping, and our aim is to stitch the images without overlapping the camera regions.
hence, can you help me resolve this issue? we tried some techniques like OpenCV panorama stitching, but that methodology did not work very well.

sorry for the late reply! please refer to these two topics: topic1, topic2.