• Hardware Platform: Jetson Nano Production Module on Nvidia carrier board
• DeepStream 5.0
• JetPack Version 4.4.1-b50
• TensorRT Version 7.1.3-1+cuda10.2
• Issue Type = questions: I’m trying to set up a full-frame classifier using a custom CUDA engine generated from a TensorRT network definition. I am probing the nvinfer element at its source pad and would like to extract class_id and confidence from the buffer. However, NvDsObjectMeta.class_id returns -1 and NvDsObjectMeta.confidence returns 0.0. I need help understanding what I did wrong.
• How to reproduce the issue? Please see the gie config and the src_pad_buffer_probe method below:
[property]
num-detected-classes=5
net-scale-factor=0.0039215686274509804
batch-size=1
labelfile-path=…/models/labels.txt
model-engine-file=…/models/MasterPlan/MasterPlan.engine
gie-unique-id=1
operate-on-gie-id=1
operate-on-class-ids=0;1;2;3;4
model-color-format=0
process-mode=1
classifier-threshold=0.01
network-type=1 # classifier
parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infercustomparser.so
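As a side note on the config values above, the net-scale-factor is simply 1/255, which normalizes 8-bit pixel intensities into [0, 1] before inference. A quick sanity check (not part of the pipeline):

```python
# net-scale-factor in the config is 1/255, i.e. it rescales
# 8-bit pixel values [0, 255] into [0, 1] before inference.
net_scale_factor = 0.0039215686274509804
assert abs(net_scale_factor - 1.0 / 255.0) < 1e-12
print(255 * net_scale_factor)  # a white pixel maps to ~1.0
```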
def src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            message = f'{u_data} {obj_meta.class_id} {obj_meta.confidence}'
            print(message)
            # s.sendall(message.encode())
            time.sleep(1.)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
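For context on what I expected: my understanding is that for a classifier gie (network-type=1), results are attached to each object as NvDsClassifierMeta with a list of NvDsLabelInfo entries, rather than being written into obj_meta.class_id. Here is a pure-Python sketch of that traversal, using stand-in classes in place of pyds (the field names classifier_meta_list, label_info_list, result_class_id, result_prob, and result_label follow the DeepStream metadata API; the _Node helper is hypothetical):

```python
# Stand-in mock of the DeepStream metadata linked-list layout (pyds is not
# imported here); field names follow NvDsClassifierMeta / NvDsLabelInfo.
class _Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class LabelInfo:
    def __init__(self, result_class_id, result_prob, result_label):
        self.result_class_id = result_class_id
        self.result_prob = result_prob
        self.result_label = result_label

class ClassifierMeta:
    def __init__(self, label_info_list):
        self.label_info_list = label_info_list

class ObjectMeta:
    def __init__(self, classifier_meta_list):
        self.classifier_meta_list = classifier_meta_list

def read_classifier_results(obj_meta):
    """Walk obj_meta.classifier_meta_list -> label_info_list and collect
    (class_id, prob, label) tuples, mirroring the pyds cast-and-next pattern."""
    results = []
    l_class = obj_meta.classifier_meta_list
    while l_class is not None:
        class_meta = l_class.data        # pyds: NvDsClassifierMeta.cast(l_class.data)
        l_label = class_meta.label_info_list
        while l_label is not None:
            label_info = l_label.data    # pyds: NvDsLabelInfo.cast(l_label.data)
            results.append((label_info.result_class_id,
                            label_info.result_prob,
                            label_info.result_label))
            l_label = l_label.next
        l_class = l_class.next
    return results

# Example: one classifier result attached to an object
obj = ObjectMeta(_Node(ClassifierMeta(_Node(LabelInfo(3, 0.87, "class_3")))))
print(read_classifier_results(obj))  # [(3, 0.87, 'class_3')]
```

Is this the list I should be walking in my probe instead of reading obj_meta.class_id directly?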
Happy to share whatever other information is needed, thank you.