Replicate secondary GIE in deepstream-app

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 495.44
• Issue Type( questions, new requirements, bugs) Question

Hi, I followed this link and am implementing the PGIE+SGIE in deepstream-app.

I was wondering how I can replicate the “sgie_pad_buffer_probe” function in deepstream-app.

In deepstream-app, setting “process-mode=2” in the config file makes the GIE act as a secondary detector/classifier. I want to take the output of the secondary inference and parse it (e.g. landmarks).
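For context, this is roughly the deepstream-app config group involved; a minimal sketch, assuming placeholder file names and IDs (the group and key names are standard deepstream-app ones):

```
[secondary-gie0]
enable=1
# must differ from the primary GIE's unique id
gie-unique-id=2
# operate only on objects produced by the primary detector
operate-on-gie-id=1
# nvinfer config for the secondary model; its [property] group
# is where process-mode=2 is set
config-file=sgie_config.txt
```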

The link demonstrates:

  1. pgie_pad_buffer_probe → performs the primary inference and generates the face boxes
  2. sgie_pad_buffer_probe → performs the secondary inference and attaches the landmarks as metadata

But the deepstream-app setup is a bit different: the secondary bin is created automatically when [secondary-gie0] is enabled in the config file. So where should I write the code to get the secondary detector's output?

Could you please provide the steps to implement the same?


The sample “apps/sample_apps/deepstream-test2” seems closer to your requirement; you can attach a probe to the src pad of the secondary nvinfer.
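Following the deepstream-test2 pattern, such a probe might look like the sketch below (untested; assumes the DeepStream SDK headers are available and that `sgie` is your handle to the secondary nvinfer element — both names here are placeholders):

```c
/* Buffer probe on the SGIE src pad: iterate the batch metadata and
 * reach each detected object, where the secondary output is attached. */
static GstPadProbeReturn
sgie_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      /* parse the secondary inference output here, e.g. the user meta
       * or classifier meta the SGIE attached to obj_meta */
    }
  }
  return GST_PAD_PROBE_OK;
}

/* Attaching the probe (e.g. after the pipeline is built): */
GstPad *sgie_src_pad = gst_element_get_static_pad (sgie, "src");
gst_pad_add_probe (sgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
    sgie_src_pad_buffer_probe, NULL, NULL);
gst_object_unref (sgie_src_pad);
```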

Thanks for the reply.

Or can I use the “process_meta” function in deepstream-app.c?

Hi, @yingliu ,

I succeeded in implementing it in deepstream-app.

JFYI, I used the process_meta and overlay_graphics APIs to achieve that.
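For readers who land here later, the rough shape of that approach is sketched below. This is a hypothetical fragment, not the poster's actual code: `process_meta()` in deepstream-app.c already walks `frame_meta_list`/`obj_meta_list`, so the SGIE output can be read per object there and drawn later in `overlay_graphics()`:

```c
/* Inside the per-object loop of process_meta() in deepstream-app.c:
 * walk the user meta attached to each object by the secondary GIE. */
for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user;
     l_user = l_user->next) {
  NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
  /* filter on the meta type your SGIE parser attached, read the
   * landmark values, then render them in overlay_graphics() */
}
```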

Thanks for the support.

Just for verification, can you confirm whether the above-mentioned APIs are the correct ones for this? Sometimes the eyebrow landmarks I get are a bit off.
