TAO inference DetectNet_v2 clustering - large differences between NMS and DBSCAN

Hello! I’ve been using the tao-deploy Python inference scripts to run inference. I have trained TrafficCamNet on my own data, exported the model to ONNX, and then deployed the model to a TensorRT engine using the tao-deploy API. I have a few questions about clustering. I am using the following config:

clustering_config {
  clustering_algorithm: DBSCAN
  dbscan_confidence_threshold: 0.1
  coverage_threshold: 0.001
  dbscan_eps: 0.15
  dbscan_min_samples: 1
  nms_iou_threshold: 0.5
  nms_confidence_threshold: 0.1
  minimum_bounding_box_height: 10
}

When I set clustering_algorithm to NMS, I see the following output:

I understand that the confidence associated with each bbox is the same as the coverage tensor value for that bbox. This doesn’t make sense to me. Shouldn’t this figure denote the model’s confidence in the prediction, not the number of grid cells covered by the object?

When I set the clustering_algorithm to DBSCAN, I see the following output:

The first thing I notice is that the output is radically different. Why does DBSCAN select far fewer of the post-processed bounding boxes than NMS? Is there something obvious in my config that is leading to this output?

Additionally, you can now see that the confidence figures do not match the coverage tensor. What do these figures represent?
Thanks!


This confidence comes from the cov tensor:
cov: weights for the rectangles, (num_imgs, 1, W, H) array
https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/utils.py#L75.

covs (np.ndarray): The raw coverage predictions from the network.
https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/utils.py#L284.

scores (np.ndarray): Array of filtered scores (coverages).
https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/utils.py#L173.

scores = covs
https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/utils.py#L194.
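
Putting those lines together: DetectNet_v2 has no separate confidence head, so the coverage values themselves become the scores. Below is a minimal sketch of that step (an illustration, not the tao_deploy source; the function name filter_boxes and the array shapes are assumptions):

import numpy as np

def filter_boxes(covs, bboxes, confidence_threshold=0.1):
    # Illustrative only: covs is an (H, W) coverage map for one class,
    # bboxes is an (H, W, 4) grid of candidate boxes.
    covs_flat = covs.reshape(-1)             # one coverage value per grid cell
    boxes_flat = bboxes.reshape(-1, 4)       # one candidate box per grid cell
    keep = covs_flat > confidence_threshold  # drop low-coverage candidates
    # The surviving coverage values ARE the scores: scores = covs.
    return boxes_flat[keep], covs_flat[keep]

So the confidence printed with each NMS box is simply the coverage value at the grid cell that produced it.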

Please set a lower dbscan_confidence_threshold and retry.
You can also check the value in https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/utils.py#L383.
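
For intuition about why DBSCAN can keep far fewer boxes than NMS, here is a hedged sketch of DBSCAN-style box clustering. It assumes a pairwise distance of 1 - IoU (so dbscan_eps: 0.15 would only group candidates whose IoU exceeds 0.85) and an aggregate_cov-style cluster confidence; the helpers iou_matrix and cluster_boxes are illustrative names, not the tao_deploy code:

import numpy as np
from sklearn.cluster import DBSCAN

def iou_matrix(boxes):
    # boxes: (N, 4) as (x1, y1, x2, y2). Returns the (N, N) pairwise IoU.
    x1 = np.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def cluster_boxes(boxes, scores, eps=0.15, min_samples=1,
                  confidence_threshold=0.1):
    dist = 1.0 - iou_matrix(boxes)  # high IoU -> small distance (assumption)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)
    clustered = []
    for lbl in set(labels) - {-1}:  # -1 marks DBSCAN noise points
        members = labels == lbl
        w = scores[members]
        conf = w.sum()              # 'aggregate_cov'-style cluster confidence
        if conf < confidence_threshold:
            continue                # what dbscan_confidence_threshold gates
        box = (boxes[members] * w[:, None]).sum(0) / w.sum()
        clustered.append((box, conf))
    return clustered

Under these assumptions, a strict eps splits the candidates into many small clusters, each with a small summed confidence, and dbscan_confidence_threshold then discards them; that is consistent with the advice above to lower the threshold.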

In the tao-deploy docker, the ‘aggregate_cov’ mode is set.
In aggregate_cov mode, the final confidence of a detection is the sum of the confidences of all the candidate bboxes in a cluster.
https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/postprocessor.py#L218.
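
For clarity, a tiny sketch of the difference between the two confidence models (the mode names come from the linked postprocessor; the helper cluster_confidence is an illustration, not the library API):

import numpy as np

def cluster_confidence(member_covs, mode="aggregate_cov"):
    member_covs = np.asarray(member_covs, dtype=float)
    if mode == "aggregate_cov":
        return member_covs.sum()   # summed over the cluster; can exceed 1.0
    if mode == "mean_cov":
        return member_covs.mean()  # stays in the range of the raw coverages
    raise ValueError(f"unknown confidence model: {mode}")

This also explains why the DBSCAN confidences you printed no longer match the raw coverage tensor: in aggregate_cov mode each reported confidence is a sum over a cluster’s members rather than a single coverage value.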
If you want mean_cov instead, you can log in to the docker container and modify the code to set it to that mode.
For example, open /usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/postprocessor.py with vim, copy the contents of https://github.com/NVIDIA/tao_deploy/blob/main/nvidia_tao_deploy/cv/detectnet_v2/postprocessor.py into it, and make the change there.
