Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU Tesla T4
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 450.119.03
• Issue Type (questions, new requirements, bugs) questions (possibly a bug)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
deepstream-ssd-parser sample app. I am trying to add secondary inference (a multi-label classifier using nvinferserver) on the primary detection objects. The pipeline works, i.e. there are no issues with model loading or with running the pipeline, but I am not able to get any classifier output from a probe function placed at either the sgie src pad or the nvvidconv sink pad (the next element). obj_meta.obj_user_meta_list is always None.
My probe function:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def sgie1_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj = frame_meta.obj_meta_list
        count = frame_meta.num_obj_meta
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            l_class = obj_meta.obj_user_meta_list
            print(l_class)  # always prints None
            while l_class is not None:
                l_user = pyds.NvDsUserMeta.cast(l_class.data)
                # Only handle raw tensor output attached by the sgie.
                if (
                    l_user.base_meta.meta_type
                    == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
                ):
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(l_user.user_meta_data)
                    layers_info = []
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        layers_info.append(layer)
                    # App-local helpers for parsing and attaching classifier output.
                    frame_object_list = nvds_infer_parse_custom_tf(layers_info, count)
                    for frame_object in frame_object_list:
                        add_classifier_obj_meta_to_frame(frame_object, batch_meta, obj_meta)
                # Advance before looping; a bare `continue` above this point
                # would skip this step and spin forever on non-tensor meta.
                try:
                    l_class = l_class.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
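For context, I believe the sgie only attaches NVDSINFER_TENSOR_OUTPUT_META if raw tensor output is enabled in its config. A minimal sketch of the relevant nvinferserver config fields (assuming the standard Gst-nvinferserver protobuf text format; the unique_id and gie ids below are placeholders):

infer_config {
  unique_id: 2  # this sgie's id (placeholder)
  # backend / preprocess / postprocess sections omitted
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS  # run on detected objects, not full frames
  operate_on_gie_id: 1                     # should match the pgie's unique_id
}
output_control {
  output_tensor_meta: true                 # attach raw tensors as obj_user_meta
}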
My pipeline:
streammux.link(queue1)
queue1.link(pgie)
pgie.link(sgie1)
sgie1.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(queue5)
queue5.link(nvvidconv2)
nvvidconv2.link(capsfilter)
capsfilter.link(encoder)
encoder.link(codeparser)
codeparser.link(container)
container.link(sink)
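For completeness, the probe is attached to the sgie's src pad in the usual GStreamer way; a minimal sketch (sgie1 is the element linked above):

import sys

sgie_src_pad = sgie1.get_static_pad("src")
if not sgie_src_pad:
    sys.stderr.write("Unable to get src pad of sgie1\n")
else:
    # The last argument (0) is passed through to the probe as u_data.
    sgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, sgie1_src_pad_buffer_probe, 0)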
I have tried the same probe function with the PeopleNet TLT model as the primary inference using nvinfer, and it works great: I get the output tensors of the classifier model.
Could the issue be that the object meta is added to the frame manually (as is done in the deepstream-ssd-parser example), and these objects do not go as input to the secondary classifier?
I am confused about whether secondary inference is happening at all: I am not getting any errors anywhere, but I also cannot read the output.
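For reference, here is roughly how the detector objects get attached to the frame, following deepstream-ssd-parser (a sketch: the exact assignments in the sample may differ, and unique_component_id is my guess at what the sgie's operate_on_gie_id filter matches against):

# frame_object is an NvDsInferObjectDetectionInfo produced by the custom parser.
obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
obj_meta.class_id = frame_object.classId
obj_meta.confidence = frame_object.detectionConfidence
obj_meta.object_id = 0xFFFFFFFFFFFFFFFF  # UNTRACKED_OBJECT_ID in the sample
# Assumption: if this is left unset, a sgie filtering on operate_on_gie_id
# may silently skip every object.
obj_meta.unique_component_id = 1  # placeholder: the pgie's unique_id
rect = obj_meta.rect_params
rect.left = frame_object.left
rect.top = frame_object.top
rect.width = frame_object.width
rect.height = frame_object.height
pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)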
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)