How to get the bounding box coordinates of a secondary detector that operates on the outputs of a primary detector in DeepStream Python?

I have two detectors, a primary and a secondary. The secondary detector operates on the outputs of the first one and produces bounding boxes. Using the provided test cases, I am able to access the bounding box coordinates of the primary detector, but I am unable to access the bounding box coordinates of the second one. My pipeline is the primary detector, followed by a tracker, followed by the secondary detector.

Could someone help me find the syntax for accessing those box coordinates?

Can you share the config file of your SGIE model?

/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test2

I think you should check this source for more info.

Thanks for the response. I have checked the source you mentioned, i.e., deepstream-test2, and my test case is built from it. In my case, instead of a secondary classifier, I use a secondary detector that only infers on the output of the primary detector. In test2, inside the function osd_sink_pad_buffer_probe(), the object coordinates are picked up with the code below:

while l_obj is not None:
    try:
        # Casting l_obj.data to pyds.NvDsObjectMeta
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        class_id = obj_meta.class_id
        obj_id = obj_meta.object_id
        obj_conf = obj_meta.confidence
        rect_params = obj_meta.rect_params
        top = int(rect_params.top)
        left = int(rect_params.left)
        width = int(rect_params.width)
        height = int(rect_params.height)
    except StopIteration:
        break
    try:
        l_obj = l_obj.next
    except StopIteration:
        break

When I use the snippet above, I can access the results of the primary detector. I want to access the coordinates of the secondary detector in the same manner, but I don't know how.

My application is simple: I detect a vehicle in a frame and then the license plate within the cropped vehicle. I am able to access the vehicle bounding boxes, but I cannot find the syntax for accessing the bounding box coordinates of the number plate detected on the vehicle.
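For what it's worth, in the back-to-back-detectors sample the secondary detector does not replace the primary boxes: it appends its own NvDsObjectMeta entries to the same frame's obj_meta_list, with unique_component_id set to the SGIE's gie-unique-id (2 in my config below) and parent pointing at the primary object. So the same probe loop should be able to pick up the plate boxes by filtering on that id. Below is a minimal sketch of that filtering logic using plain Python objects as stand-ins for pyds.NvDsObjectMeta (the real probe would iterate frame_meta.obj_meta_list and cast l_obj.data as in the snippet above; the field names unique_component_id, rect_params, and parent are the real pyds ones, but the mock objects and values are made up for illustration):

```python
from types import SimpleNamespace

SGIE_UNIQUE_ID = 2  # must match gie-unique-id in the secondary detector config


def make_obj(component_id, left, top, width, height, parent=None):
    # Stand-in for pyds.NvDsObjectMeta, carrying only the fields the probe uses.
    return SimpleNamespace(
        unique_component_id=component_id,
        rect_params=SimpleNamespace(left=left, top=top, width=width, height=height),
        parent=parent,  # for SGIE objects, points back at the vehicle object
    )


vehicle = make_obj(1, 100, 50, 400, 300)                              # PGIE output
plate = make_obj(SGIE_UNIQUE_ID, 250, 280, 120, 40, parent=vehicle)   # SGIE output
obj_meta_list = [vehicle, plate]  # both live in the same frame's object list

# Keep only boxes produced by the secondary detector.
plate_boxes = [
    (int(o.rect_params.left), int(o.rect_params.top),
     int(o.rect_params.width), int(o.rect_params.height))
    for o in obj_meta_list
    if o.unique_component_id == SGIE_UNIQUE_ID
]
print(plate_boxes)  # [(250, 280, 120, 40)]
```

The same check on obj_meta.unique_component_id inside the existing while loop is all that should be needed to separate plate boxes from vehicle boxes.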

Below is my pipeline and the way I link the elements up.

pipeline.add(pgie)
pipeline.add(tracker)
pipeline.add(sgie)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(filter1)
pipeline.add(nvvidconv1)
pipeline.add(nvosd)
if is_aarch64():
    pipeline.add(transform)
pipeline.add(sink)

streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie)
sgie.link(nvvidconv1)
nvvidconv1.link(filter1)
filter1.link(tiler)
tiler.link(nvvidconv)
nvvidconv.link(nvosd)
if is_aarch64():
    nvosd.link(transform)
    transform.link(sink)
else:
    filter1.link(sink)

FYI, I am adding the secondary detector config file below:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=./models/cnp_detector.etlt
labelfile-path=./models/labels_cnp.txt
model-engine-file=./models/cnp_detector.etlt_b16_gpu0_fp16.engine
input-dims=3;256;320;0
uff-input-blob-name=input_1
force-implicit-batch-dim=1
batch-size=16
process-mode=2
model-color-format=0
network-mode=2
num-detected-classes=1
gie-unique-id=2
operate-on-gie-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
eps=0.2
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
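As a side note on the config above: gie-unique-id=2 is the id the secondary detector stamps onto its output metadata, and operate-on-gie-id=1 ties it to the primary detector, so 2 is the value to compare against in the probe. A quick sketch of reading those two keys with Python's standard configparser (the inline string is a stand-in for the real config file path):

```python
import configparser

# Inline stand-in for the relevant part of the SGIE config file.
sgie_config = """
[property]
gie-unique-id=2
operate-on-gie-id=1
process-mode=2
"""

cfg = configparser.ConfigParser()
cfg.read_string(sgie_config)

sgie_unique_id = cfg.getint("property", "gie-unique-id")   # id to match in the probe
operates_on = cfg.getint("property", "operate-on-gie-id")  # id of the PGIE it follows
print(sgie_unique_id, operates_on)  # 2 1
```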

If your SGIE model is a detector type, you can use the back-to-back-detectors source.

You can refer to deepstream_reference_apps/back-to-back-detectors at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub, as @PhongNT mentioned.