Difference in metrics between inference and evaluation

I trained the model for about 200 epochs. During training, the model was validated on a test split. After training finished, I picked several checkpoints with the best validation metrics and ran inference on the same test split, just to inspect the predictions visually. Some of the predictions confused me, so I ran my own evaluation on the inference results, and it turned out the metrics were completely different. To compute metrics from the inference output, I used the TIDE library.
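Roughly, my evaluation looked like this (a minimal sketch; the file paths are placeholders, and it assumes the ground truth and the TAO inference output have already been converted to COCO-style JSON):

```python
from tidecv import TIDE, datasets

# Ground truth and detections in COCO format; both paths are hypothetical.
gt = datasets.COCO('test_split_annotations.json')
preds = datasets.COCOResult('detectnet_v2_inference_results.json')

tide = TIDE()
tide.evaluate(gt, preds, mode=TIDE.BOX)  # box-level evaluation
tide.summarize()                         # prints mAP and the error breakdown
```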

My evaluation: [metrics output not preserved]

TAO evaluation: [metrics output not preserved]

!tao detectnet_v2 evaluate -e $SPECS_DIR/trafficcamnet_finetune.txt \
                           -m $USER_EXPERIMENT_DIR/1f_compile_4/model.step-207788.tlt \
                           -k $KEY

I am also attaching the spec files for training and inference (the evaluation settings are in the training spec):

Train:
trafficcamnet_finetune.txt (9.9 KB)
Inference:
tmp.txt (3.9 KB)

Did you try different IoU settings when running tao detectnet_v2 evaluate and tao detectnet_v2 inference?
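A mismatched IoU matching threshold alone can produce very different numbers: the same detection can count as a true positive under one tool's threshold and a false positive under another's. A minimal illustration (the box coordinates are made up):

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (100, 100, 200, 200)   # ground-truth box
det = (130, 110, 230, 210)  # slightly shifted detection

score = iou(gt, det)        # ~0.46 here
for thresh in (0.3, 0.5):
    verdict = "TP" if score >= thresh else "FP"
    print(f"IoU={score:.2f} at threshold {thresh}: {verdict}")
```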

You can refer to average_precision_evaluator.py in the pierluigiferrari/ssd_keras repository on GitHub (commit 3ac9adaf3889f1020d74b0eeefea281d5e82f353).
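Beyond the matching threshold, evaluators also differ in how they turn the precision-recall curve into AP, for example 11-point sampling versus numeric integration, and that choice alone shifts the result. A rough sketch of the two conventions (the precision/recall values are synthetic):

```python
import numpy as np

def ap_11_point(recall, precision):
    """Pascal VOC 2007 style: sample max precision at 11 recall points."""
    return np.mean([precision[recall >= t].max() if np.any(recall >= t) else 0.0
                    for t in np.linspace(0.0, 1.0, 11)])

def ap_integrate(recall, precision):
    """VOC 2010+ style: integrate the monotonic precision envelope."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision non-increasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return np.sum((r[idx + 1] - r[idx]) * p[idx + 1])

recall = np.array([0.1, 0.4, 0.7, 0.9])
precision = np.array([1.0, 0.8, 0.6, 0.5])
print(ap_11_point(recall, precision))   # ~0.65
print(ap_integrate(recall, precision))  # ~0.62
```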
