Yolov4 - evaluation differences

• Hardware (Xavier)
• Network Type (Yolo_v4)
• TLT Version v3.22.05-tf1.15.5-py3
• Training spec file
yolo_v4_cspdarknet19.txt (2.9 KB)

When evaluating the YOLOv4 model on the validation dataset, the mAP is significantly different from the score reported during validation at training time. With other architectures such as FasterRCNN and DetectNet_v2, the standalone evaluation output seems to be consistent with the validation results during training. Do you know what might have caused this behaviour with YOLOv4?
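For reference, this is roughly how the standalone evaluation in question is run with the TAO (TLT v3.x) launcher. The model filename and encryption key below are hypothetical placeholders, not values from this thread; the spec file is the one attached above. The script only invokes `tao` if the launcher is installed:

```shell
#!/usr/bin/env sh
# Sketch only: MODEL and KEY are hypothetical placeholders.
SPEC=yolo_v4_cspdarknet19.txt            # training/eval spec attached above
MODEL=weights/yolov4_epoch_080.tlt       # hypothetical trained model path
KEY=nvidia_tlt                           # hypothetical encryption key

if command -v tao >/dev/null 2>&1; then
  # Standalone evaluation against the dataset defined in the spec's
  # validation config; this is the mAP being compared to the per-epoch
  # validation score printed during training.
  tao yolo_v4 evaluate -e "$SPEC" -m "$MODEL" -k "$KEY"
else
  echo "tao launcher not installed; command shown for reference only"
fi
```

Comparing the output of this command with the validation mAP in the training log is what exposes the discrepancy described above.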

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.

It does not make sense; could you double-check? Could you also share the full training log and evaluation log?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.