• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
• TensorRT Version: 7.1.3
Hi guys,
Suppose I create a simple pipeline for a multi-stream RTSP source like this:
Solution 1: >> one branch
Sources | streammux | nvinfer | custom_plugin | nvosd | fakesink
Solution 2: >> two branches
Sources | streammux | nvinfer | streamdemuxer | tee name=t | t. custom_plugin name=b1 | nvosd | fakesink | t. custom_plugin name=b2 | nvosd | fakesink
The aim of the custom plugin is to capture the metadata of the buffer, like this:
```python
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    try:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    except StopIteration:
        break
    frame_number = frame_meta.frame_num
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        except StopIteration:
            break
        try:
            l_obj = l_obj.next
        except StopIteration:
            break
    try:
        l_frame = l_frame.next
    except StopIteration:
        break
```
So in solution 1 we loop over as many streams as there are, but call this only once per buffer:
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
whereas in solution 2 we have one branch per stream, so we also call it once per branch and get as many batch_meta objects as there are streams.
I want to know which of the two solutions is more efficient in terms of speed and resource usage.
Specifically, do the lines below add overhead when called multiple times, once in each branch, or is it better to have a single branch and call each of them only once?
```python
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
```
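One way I thought of checking the overhead empirically is a small timing helper inside the probe. The sketch below is just an illustration: `per_call_overhead` and the dummy workload are names I made up, not DeepStream API. In a real pad probe you would pass something like `lambda: pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))` as the callable and compare the per-call cost across the two layouts.

```python
import time

def per_call_overhead(fn, n=10000):
    """Return the average wall-clock seconds per call of fn over n repetitions."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# Dummy stand-in for a pyds call; in a real probe you would time e.g.
#   lambda: pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
#   lambda: pyds.NvDsFrameMeta.cast(l_frame.data)
dummy = lambda: hash("gst_buffer")

overhead = per_call_overhead(dummy)
print(f"~{overhead * 1e9:.0f} ns per call")
```

This only measures the Python-side call cost, of course; it doesn't capture the extra buffers and probes that each additional branch adds to the pipeline itself.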