Thanks for the response. I have checked the source you mentioned, i.e., test-case-2; our test case is built from it. In this case, instead of a secondary classifier I am using a secondary detector, which infers only on the output of the primary detector. In test-case-2, inside the function osd_sink_pad_buffer_probe(), the coordinates of detected objects are picked up with the code below:
while l_obj is not None:
    try:
        # Casting l_obj.data to pyds.NvDsObjectMeta
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break
    class_id = obj_meta.class_id
    obj_id = obj_meta.object_id
    obj_conf = obj_meta.confidence
    rect_params = obj_meta.rect_params
    top = int(rect_params.top)
    left = int(rect_params.left)
    width = int(rect_params.width)
    height = int(rect_params.height)
    try:
        l_obj = l_obj.next
    except StopIteration:
        break
With the above snippet I can access the results of the primary detector. I want to access the coordinates from the secondary detector in the same manner, but I don't know how.
My application is simple: I detect a vehicle in the frame and then the license plate on the cropped vehicle. I am able to access the vehicle bounding boxes, but I cannot find the syntax for accessing the bounding-box coordinates of the number plate detected on the vehicle.
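My understanding so far (an assumption on my part, not yet verified in my pipeline) is that a secondary detector adds its detections to the same per-frame object list, tagged with unique_component_id equal to the SGIE's gie-unique-id, and with obj_meta.parent pointing back at the primary (vehicle) object. Here is a minimal pure-Python sketch of that filtering logic; ObjMeta, RectParams, plate_boxes, and SGIE_UNIQUE_ID are illustrative names standing in for the pyds metadata, not the DeepStream API itself:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Mock stand-in for pyds.NvDsRectParams.
@dataclass
class RectParams:
    top: float
    left: float
    width: float
    height: float

# Mock stand-in for pyds.NvDsObjectMeta; the field names mirror the real
# metadata (unique_component_id, parent, rect_params) but this is a sketch.
@dataclass
class ObjMeta:
    class_id: int
    unique_component_id: int  # gie-unique-id of the GIE that produced it
    rect_params: RectParams
    parent: Optional["ObjMeta"] = None

SGIE_UNIQUE_ID = 2  # matches gie-unique-id in the SGIE config (assumption)

def plate_boxes(frame_objects: List[ObjMeta]) -> List[Tuple[RectParams, Optional[RectParams]]]:
    """Return (plate_box, parent_vehicle_box) pairs from one frame's object list."""
    pairs = []
    for obj in frame_objects:
        if obj.unique_component_id == SGIE_UNIQUE_ID:
            parent_box = obj.parent.rect_params if obj.parent else None
            pairs.append((obj.rect_params, parent_box))
    return pairs

# Usage: one vehicle from the PGIE (id 1) and one plate from the SGIE (id 2).
vehicle = ObjMeta(0, 1, RectParams(100, 50, 400, 300))
plate = ObjMeta(0, SGIE_UNIQUE_ID, RectParams(250, 180, 120, 40), parent=vehicle)
found = plate_boxes([vehicle, plate])
```

If this picture of the metadata layout is right, the real loop would be the same filter applied while walking frame_meta.obj_meta_list, checking obj_meta.unique_component_id.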
Below is my pipeline and the way I link the elements up.
pipeline.add(pgie)
pipeline.add(tracker)
pipeline.add(sgie)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(filter1)
pipeline.add(nvvidconv1)
pipeline.add(nvosd)
if is_aarch64():
    pipeline.add(transform)
pipeline.add(sink)

streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie)
sgie.link(nvvidconv1)
nvvidconv1.link(filter1)
filter1.link(tiler)
tiler.link(nvvidconv)
nvvidconv.link(nvosd)
if is_aarch64():
    nvosd.link(transform)
    transform.link(sink)
else:
    # filter1 is already linked to tiler, so the sink must hang off nvosd here
    nvosd.link(sink)
FYI, I am adding the secondary detector config file below:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=./models/cnp_detector.etlt
labelfile-path=./models/labels_cnp.txt
model-engine-file=./models/cnp_detector.etlt_b16_gpu0_fp16.engine
input-dims=3;256;320;0
uff-input-blob-name=input_1
force-implicit-batch-dim=1
batch-size=16
process-mode=2
model-color-format=0
network-mode=2
num-detected-classes=1
gie-unique-id=2
operate-on-gie-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
eps=0.2
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
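As a sanity check on my side: gie-unique-id=2 in this config should be the id carried by the SGIE's detections, and operate-on-gie-id=1 ties it to the primary detector. A quick way to read those values back with the standard-library configparser (the inline string below is just the relevant fragment of the config above):

```python
import configparser

# Fragment of the SGIE config; in the real app this comes from the
# config-file-path property of the nvinfer element.
SGIE_CONFIG = """
[property]
gie-unique-id=2
operate-on-gie-id=1
process-mode=2
num-detected-classes=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(SGIE_CONFIG)
props = cfg["property"]

sgie_id = props.getint("gie-unique-id")          # id tagged on SGIE detections
operates_on = props.getint("operate-on-gie-id")  # must match the PGIE's gie-unique-id
print(sgie_id, operates_on)
```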