Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): JetPack 5.1 [L4T 35.2.1]
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
The custom NMS parsing function for TAO applications, NvDsInferParseCustomBatchedNMSTLT, only takes the perClassThreshold of the first class into account and uses it for all the others.
The expected behaviour is that it uses the pre-cluster threshold of the class that matches each bounding box's label.
I agree that the objectList vector contains all objects of all classes. However, every element in the vector is filtered against const float threshold = detectionParams.perClassThreshold[0]; (line 216), so all classes are filtered with the threshold of a single class rather than with the threshold of the predicted class.
Additionally, detectionParams includes the parameter perClassPreclusterThreshold. As this threshold-based filtering is applied before NMS, I also propose that this variable be used instead.
I propose that line 216 be removed, and line 228 be altered as follows:
- if ( p_scores[i] < threshold) continue;
+ if ( p_scores[i] < detectionParams.perClassPreclusterThreshold[(unsigned int) p_classes[i]]) continue;
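For context, here is a minimal sketch of how the filtering loop could look with this change. It is not the verbatim upstream parser: the helper fillNmsObjects and the clip helper are illustrative names, while p_keep_count, p_bboxes, p_scores, p_classes, detectionParams and objectList follow the names used in the TAO sample parser.

#include <algorithm>
#include <vector>
#include "nvdsinfer_custom_impl.h"

static float clip(float v, float lo, float hi) { return std::max(lo, std::min(v, hi)); }

/* Fill objectList from the BatchedNMS outputs, thresholding each detection
   with the pre-cluster threshold of its own class (sketch only). */
static void fillNmsObjects(int const *p_keep_count, float const *p_bboxes,
                           float const *p_scores, float const *p_classes,
                           NvDsInferNetworkInfo const &networkInfo,
                           NvDsInferParseDetectionParams const &detectionParams,
                           std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    const int keep_top_k = 200;
    for (int i = 0; i < p_keep_count[0] && (int) objectList.size() <= keep_top_k; i++) {
        const unsigned int classId = (unsigned int) p_classes[i];
        if (classId >= detectionParams.numClassesConfigured) continue;

        /* Per-class lookup replaces detectionParams.perClassThreshold[0]. */
        if (p_scores[i] < detectionParams.perClassPreclusterThreshold[classId]) continue;

        NvDsInferObjectDetectionInfo object;
        object.classId = classId;
        object.detectionConfidence = p_scores[i];
        /* Box decoding unchanged: normalised coordinates scaled to network size. */
        object.left   = clip(p_bboxes[4 * i]     * networkInfo.width,  0, networkInfo.width  - 1);
        object.top    = clip(p_bboxes[4 * i + 1] * networkInfo.height, 0, networkInfo.height - 1);
        object.width  = clip(p_bboxes[4 * i + 2] * networkInfo.width,  0, networkInfo.width  - 1) - object.left;
        object.height = clip(p_bboxes[4 * i + 3] * networkInfo.height, 0, networkInfo.height - 1) - object.top;
        objectList.push_back(object);
    }
}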
Thank you for addressing this issue. This is to simplify the code logic. Currently, all class thresholds are the same value, read from the config file.
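For reference, the standard nvinfer config sections do allow per-class thresholds in principle; the values below are only illustrative:

[class-attrs-all]
pre-cluster-threshold=0.4

[class-attrs-1]
pre-cluster-threshold=0.7

With the current parser only perClassPreclusterThreshold[0] is consulted, so a per-class override like [class-attrs-1] above would have no effect on the filtering.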
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Thanks. Since the postprocessing code here is open source, you can add this change yourself for now.