Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.2.1
I am trying to add the object detection confidence to the bounding box label in the tracking sample gpubootcamp/Introduction_to_Multi-DNN_pipeline.ipynb at a647a2c3fc75828cbbf1cbd5ab29f865c491a35c · openhackathons-org/gpubootcamp · GitHub. The labels show the object class and the tracking ID, but I don’t understand how this information gets there. See image below:
I see that osd_sink_pad_buffer_probe(pad, info, u_data) draws the information onto the image frame, but I can’t find where it specifies that the class and tracker ID should be placed at the top of the bounding box. I also want to include the object confidence in that label.
I know I can get the confidence for each object in this loop by accessing obj_meta.confidence:
```python
while l_obj is not None:
    try:
        # Casting l_obj.data to pyds.NvDsObjectMeta
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break
    obj_counter[obj_meta.class_id] += 1
    obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 1.0)
    obj_meta.text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
    obj_meta.text_params.text_bg_clr.set(0.0, 0.0, 1.0, 1.0)
    # Can get confidence by accessing obj_meta.confidence
    try:
        l_obj = l_obj.next
    except StopIteration:
        break
```
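For context, this is what I have been experimenting with (a sketch, not tested): my assumption is that assigning a string to obj_meta.text_params.display_text inside this loop would override the label the OSD draws, similar to how the sample apps assign display_text on a text element. The format_label helper is my own hypothetical name, and I am assuming obj_meta.obj_label and obj_meta.object_id are the fields that hold the class name and tracker ID:

```python
def format_label(class_name, tracker_id, confidence):
    """Build the label string I would like to see above the bounding box."""
    return "{} {} {:.2f}".format(class_name, tracker_id, confidence)

# Inside the object loop, I would then try something like
# (obj_label / object_id are assumptions on my part):
#
#     obj_meta.text_params.display_text = format_label(
#         obj_meta.obj_label, obj_meta.object_id, obj_meta.confidence)
```

I am not sure whether overriding text_params.display_text this way is the intended mechanism, which is why I am asking here.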
Therefore, how can I define what is shown in the bounding box text?
Thanks in advance,
Flávio Mello