TLT (TAO) Evaluation Calculation

I would like to understand how the TLT evaluation step works compared with the accuracy calculations of other training platforms. I am comparing a TLT model against models trained elsewhere and need some context to interpret the results. During the evaluation step, I notice a check of all ground truth against the predictions. Is this the only check involved in the accuracy? The results report neither recall nor false positives, so is accuracy calculated only from the ground truth matched against the predictions, or are false positives also taken into account?

Matching predictions to ground truth, class 1/5.: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 21236/21236 [00:01<00:00, 16191.39it/s]
Matching predictions to ground truth, class 2/5.: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 3792/3792 [00:00<00:00, 39779.90it/s]
Matching predictions to ground truth, class 3/5.: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 2870/2870 [00:00<00:00, 18457.39it/s]
Matching predictions to ground truth, class 4/5.: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 71006/71006 [00:04<00:00, 14740.07it/s]
Matching predictions to ground truth, class 5/5.: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 1212/1212 [00:00<00:00, 17172.99it/s]

According to the log, you are running the detectnet_v2 network. During evaluation, both true positives and false positives are computed. However, the reported results do not include recall or false-positive counts; currently only AP and mAP are shown.
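To make this concrete, here is a minimal sketch (not TAO's actual source) of how AP is typically derived from exactly the kind of TP/FP matching shown in the log above. It assumes each prediction for one class carries a confidence score and has already been matched against ground truth (True = true positive, False = false positive); the function name and signature are illustrative only.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """Compute AP for one class by integrating the precision-recall curve."""
    order = np.argsort(scores)[::-1]          # rank predictions by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(fp)
    precision = cum_tp / (cum_tp + cum_fp)    # false positives lower precision
    recall = cum_tp / num_gt                  # unmatched ground truth lowers recall
    # All-point interpolation: take the precision envelope, then integrate P(R).
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

Note that false positives do enter the metric even though they are not reported separately: each FP reduces precision at its rank, which pulls the AP down. mAP is then just the mean of the per-class APs.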

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.