Inference and clustering thresholds: nvinfer vs nvinferserver

In the nvinfer plugin, we have options to set the model output confidence threshold (pre-cluster-threshold) and the threshold applied after clustering (post-cluster-threshold).
They can be set for all classes (class-attrs-all) or overridden for a specific model class (class-attrs-<class_ID>, with IDs starting from 0):

[class-attrs-all]
pre-cluster-threshold=0.5
post-cluster-threshold=0.5

[class-attrs-0]
pre-cluster-threshold=0.3
post-cluster-threshold=0.5

[class-attrs-1]
pre-cluster-threshold=0.2
post-cluster-threshold=0.5

In the nvinferserver plugin, pre_threshold can be found in per_class_params, which is an optional map<int32, PerClassParams>:

per_class_params {
    key: 1
    value { pre_threshold: 0.4 }
}
per_class_params {
    key: 2
    value { pre_threshold: 0.5 }
}
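For context, here is a minimal sketch of where per_class_params sits inside an nvinferserver detection config (prototxt). The field names follow the Gst-nvinferserver documentation; the class IDs and threshold values are illustrative assumptions, not taken from a real model:

```
infer_config {
  postprocess {
    detection {
      num_detected_classes: 3
      # Per-class pre-clustering thresholds; classes without an entry
      # fall back to the clustering algorithm's own threshold field.
      per_class_params {
        key: 0
        value { pre_threshold: 0.3 }
      }
      per_class_params {
        key: 1
        value { pre_threshold: 0.2 }
      }
    }
  }
}
```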

Doubt #1: Is there some way to set pre_threshold just once for all the model’s classes?
Doubt #2: Does the class ID integer start at 0, as in nvinfer, or at 1, as in the nvinferserver example above?

If a cluster type is enabled, each one exposes a threshold field that, according to the descriptions, seems to perform the same action as the pre_threshold referred to above:

NMS
confidence_threshold - Detection score lesser than this threshold would be rejected

DbScan
pre_threshold - Detection score lesser than this threshold would be rejected before DBSCAN clustering

GroupRectangle
confidence_threshold - Detection score lesser than this threshold would be rejected

SimpleCluster
threshold - Detection score lesser than this threshold would be rejected
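For example, when DBSCAN clustering is enabled, that threshold appears directly inside the clustering block. This is a sketch based on the documented DbScan fields; the numeric values are illustrative assumptions:

```
detection {
  num_detected_classes: 3
  dbscan {
    # Detections below this score are rejected before DBSCAN clustering
    pre_threshold: 0.2
    eps: 0.7
    min_boxes: 3
  }
}
```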

Doubt #3: In practice, do those fields produce the same effect as pre_threshold from DetectionParams?

Doubt #4: Where can we find the equivalent of nvinfer’s post-cluster-threshold in nvinferserver?

• Hardware Platform (Jetson / GPU)
Any
• DeepStream Version
6.1.1 (latest)
• Issue Type( questions, new requirements, bugs)
Questions

@fanzh my doubts are mainly related to ensuring the consistency of properties when translating our internal inference config to the nvinfer or nvinferserver GStreamer elements. Thank you

  1. No, currently each model class has its own pre_threshold; please refer to:

     /* <class_id:class_parameter> */
     map<int32, PerClassParams> per_class_params = 2;

  2. It starts from 0; please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt

  3. Yes. NMS’s confidence_threshold, DbScan’s pre_threshold, GroupRectangle’s confidence_threshold and SimpleCluster’s threshold will be overwritten by per_class_params’s pre_threshold.

  4. Currently nvinferserver does not support post-cluster-threshold; please refer to Gst-nvinferserver — DeepStream 6.1.1 Release documentation. There is another solution: you can output the tensor and process it with custom functions; please refer to the sample deepstream-infer-tensor-meta-test.
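For anyone taking the custom-processing route, here is a minimal pure-Python sketch of the per-class post-cluster filtering logic. In a real pipeline this would run in a GStreamer pad probe over NvDsObjectMeta (e.g. via pyds); the function name and the plain (class_id, confidence) tuples here are hypothetical simplifications for clarity:

```python
# Sketch of a per-class post-cluster filter, mimicking what nvinfer's
# post-cluster-threshold does. `objects` is a list of (class_id, confidence)
# pairs representing already-clustered detections; `thresholds` maps a class
# ID to its post-cluster threshold, with `default` used for missing classes.

def filter_post_cluster(objects, thresholds, default=0.5):
    kept = []
    for class_id, confidence in objects:
        # Drop any clustered detection below its class threshold
        if confidence >= thresholds.get(class_id, default):
            kept.append((class_id, confidence))
    return kept

# Example: class 0 keeps >= 0.5, class 1 keeps >= 0.7, others use the default.
detections = [(0, 0.6), (1, 0.65), (1, 0.8), (2, 0.4)]
print(filter_post_cluster(detections, {0: 0.5, 1: 0.7}))
# → [(0, 0.6), (1, 0.8)]
```

The same comparison can be done in C/C++ inside a pad probe by iterating the frame's object meta list and removing objects that fail the check.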


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.