Bounding box confidence filtering at NvDsInferParseCustomEfficientNMS and after

Hello everybody!

I am running a DeepStream pipeline for inference with YOLOv7, and I am using NvDsInferParseCustomEfficientNMS to parse the output of the EfficientNMS plugin into DeepStream format. I want to ask:

  1. Where does NvDsInferParseCustomEfficientNMS take the confidence threshold from for filtering? For some reason it is 0.2, even though the .trt engine was built with a confidence threshold of 0.
  2. I also hardcoded the threshold in that parser so that detections below 0.2 pass, and the bounding boxes seem to make it through the parser function, but I am not able to get them at tiler_sink_pad_buffer_probe, where I extract the metadata (a minimal sketch of the probe pattern is shown after this list).
    Can someone explain how this is possible and whether there are additional filtering steps in between?
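For context, the probe follows the standard DeepStream metadata iteration pattern. A simplified sketch (not my exact code; the function and variable names are only illustrative) of what I mean by "extracting the metadata" looks like this:

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Illustrative probe on the tiler sink pad: walk batch -> frame -> object
 * metadata and print every detection that reached this point of the pipeline. */
static GstPadProbeReturn
tiler_sink_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      g_print ("frame %d: class %d, confidence %.3f\n",
               frame_meta->frame_num, obj_meta->class_id, obj_meta->confidence);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

The boxes I expect from the parser never show up in this obj_meta_list.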

Thank you very much

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)

• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)

• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: DS 6.0
• JetPack Version (valid for Jetson only): 4.6 (L4T 32.6.1)
• TensorRT Version: 8.0.1
• Issue Type: Question. I would like to better understand the inference pipeline's pre- and post-processing: when is the custom EfficientNMS parser called (before the post-processing?), and which threshold does it use?

Please refer to this code: the “pre-cluster-threshold” value from the configuration file becomes that perClassThreshold.
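If the configuration file does not set it, nvinfer falls back to its default pre-cluster threshold, which (as far as I recall) is 0.2, matching the value you are seeing. To let lower-confidence detections through, set it explicitly in the per-class section of the nvinfer configuration file, for example:

```
[class-attrs-all]
pre-cluster-threshold=0.0
```

Inside the custom parser, that value arrives through the NvDsInferParseDetectionParams argument. The following is only an illustrative sketch (not the actual NVIDIA implementation; field names follow nvdsinfer_custom_impl.h as far as I recall) of how a bounding box parser such as NvDsInferParseCustomEfficientNMS typically applies it:

```cpp
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Illustrative helper: apply the per-class threshold that nvinfer fills in
 * from "pre-cluster-threshold" before appending a detection to the output list.
 * The config value ends up in perClassPreclusterThreshold (perClassThreshold
 * is its older alias). */
static void
addDetectionIfAboveThreshold (float left, float top, float width, float height,
                              unsigned int classId, float score,
                              NvDsInferParseDetectionParams const &detectionParams,
                              std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
  if (classId >= detectionParams.numClassesConfigured ||
      score < detectionParams.perClassPreclusterThreshold[classId])
    return; /* filtered out here -> never appears in downstream metadata */

  NvDsInferObjectDetectionInfo obj;
  obj.classId = classId;
  obj.detectionConfidence = score;
  obj.left = left;
  obj.top = top;
  obj.width = width;
  obj.height = height;
  objectList.push_back (obj);
}
```

Note also that the parser is only the first filtering stage: depending on cluster-mode, the clustering step and other [class-attrs-*] properties (for example post-cluster-threshold, detected-min-w, detected-min-h) can still drop boxes after the parser, before they reach the metadata you read in your probe.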
