Inference Confidence varies (0.1 to 0.99) in YOLOv4 Compared to YOLOv3

I ran several YOLOv4 training runs. The per-class AP and overall mAP during training look good.
However, during inference on images for the ‘Person’ class, the confidence scores vary from 0.1 to 0.99. With YOLOv3 and the same dataset, we consistently achieve detections above 0.90 with fewer false positives.

Because the confidence scores vary so widely, we cannot set a confidence threshold that keeps the good detections while minimizing false positives.
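To illustrate the problem: with scores spread from 0.1 to 0.99, no single per-class threshold cleanly separates good detections from false positives. Below is a minimal sketch of the kind of threshold-based filtering we want to configure; the detection dict format and class names are assumptions for illustration, not TAO's actual inference output schema.

```python
# Hypothetical post-processing sketch: keep only detections whose confidence
# meets a per-class threshold. (Detection format is assumed, not TAO's.)

DEFAULT_THRESHOLD = 0.5

# Per-class thresholds we would like to tune. With confidence scores
# varying from 0.1 to 0.99 for the same class, any single value either
# drops true positives or admits false positives.
CLASS_THRESHOLDS = {"Person": 0.6}

def filter_detections(detections):
    """Return detections whose confidence meets the class-specific threshold."""
    kept = []
    for det in detections:
        threshold = CLASS_THRESHOLDS.get(det["class"], DEFAULT_THRESHOLD)
        if det["confidence"] >= threshold:
            kept.append(det)
    return kept

if __name__ == "__main__":
    raw = [
        {"class": "Person", "confidence": 0.99, "bbox": (10, 10, 50, 120)},
        {"class": "Person", "confidence": 0.12, "bbox": (200, 40, 30, 80)},
        {"class": "Car", "confidence": 0.55, "bbox": (5, 5, 90, 60)},
    ]
    print(len(filter_detections(raw)))  # prints 2
```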

I have also uploaded the spec file that we used. Could you please suggest any changes?
D18_yolov4_tamospec_nov10_config_3_22_tf_train_clssweight.txt (3.3 KB)

We are using the ac77f8d117ed docker image for TAO Toolkit 3.1.24.

May I know which TLT/TAO docker you used? Could you share its name? Thanks.

We are using TAO Toolkit 3.22.05.

For YOLOv4, please use the TAO 4.0 or later docker. See more in Very high loss while training TAO yolov4 - #5 by Morganh; there has been an improvement since TAO 4.0.

We have been working on YOLOv4 for a few months now, and we have already discussed this topic.

Please check: Yolo v4 Giving 0.0 AP for less images class

Our loss values, validation loss, and per-class AP are not problematic. However, despite the convergence of both training and validation loss, we are seeing low confidence scores for detected classes in our inference results.

From that topic, there are also good results when you “train it on 4.0.1”.
Could you please use the resulting model to run inference in 4.0.1 as well?

We already checked inference with 4.0.1 as well; we see the same low confidence scores for detections.

This behavior only occurs in the “Person” class, right?

No, we are facing this issue for most of the classes.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Could you download the KITTI dataset mentioned in the notebook and train against it to check whether the same behavior occurs? Thanks.

Also, what is the mAP when you train with YOLOv4? For your own dataset, did you use kmeans to generate the anchor_shapes? Please also try enlarging “loss_class_weights” a bit and retraining.
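For reference, the relevant portion of a YOLOv4 training spec might look like the fragment below. The anchor values and loss weights here are placeholders for illustration; the actual anchor shapes should be regenerated by running the kmeans utility on your own dataset's label files, and the exact values in your spec will differ.

```
yolov4_config {
  # Placeholder anchors: regenerate these with kmeans on your own labels.
  big_anchor_shape: "[(114.94, 60.67), (159.06, 114.59), (297.59, 176.38)]"
  mid_anchor_shape: "[(42.99, 31.91), (79.57, 31.75), (56.80, 56.93)]"
  small_anchor_shape: "[(15.60, 13.88), (30.25, 20.25), (20.67, 30.00)]"
  # Try increasing this a bit (illustrative value) so the classification
  # loss is weighted more heavily, then retrain.
  loss_class_weights: 1.0
  loss_loc_weight: 5.0
  loss_neg_obj_weights: 50.0
}
```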

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.