Hi @Morganh , thanks for getting back to me.
For YOLOv4 (CSPDarknet53):
If you trained a model with QAT enabled, the mAP is around 82%.
If you trained a model without QAT enabled, the mAP is 0?
Yes, but note that this problem only occurs with TensorRT INT8 precision. In FP32/FP16, both models get around 82%-84% mAP.
You get this value while running tlt-evaluate against the trt int8 engine, right?
No. I am using a Python script to load and run the model and to do the pre/post-processing. I have verified that the script gives the same results as tao evaluate with the .tlt model, but I have not tested tao evaluate against the INT8 .engine file.
I also used the following references for the pre/post-processing:
- Pre-processing: Discrepancy between results from tlt-infer and trt engine - #6 by Morganh
- Post-processing: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/post_processor/nvdsinfer_custombboxparser_tlt.cpp#L107 (the post-processor used for YOLOv4 in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/configs/yolov4_tlt/pgie_yolov4_tlt_config.txt)
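For reference, here is a minimal sketch of the pre/post-processing steps my script follows, based on the two links above. The normalization values (RGB order, divide by 255, CHW layout) and the BatchedNMS output layout are assumptions for illustration; the exact scale, offset, and channel order depend on the training spec, so please check them against your own config.

```python
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Prepare one frame for the TensorRT engine.

    frame_bgr: HxWx3 uint8 image already resized to the network input size.
    Assumptions: model expects RGB planar (CHW) input scaled to [0, 1].
    """
    rgb = frame_bgr[..., ::-1]                # BGR -> RGB (assumption)
    chw = rgb.transpose(2, 0, 1)              # HWC -> CHW planar layout
    norm = chw.astype(np.float32) / 255.0     # scale to [0, 1] (assumption)
    return norm[np.newaxis, ...]              # add batch dim: 1x3xHxW

def parse_batched_nms(num_dets, boxes, scores, classes, conf_thresh=0.3):
    """Parse BatchedNMS-style outputs into kept detections.

    Assumed output shapes (hypothetical, mirroring the DeepStream parser):
      num_dets: (batch,), boxes: (batch, max_det, 4),
      scores/classes: (batch, max_det). Batch index 0 only.
    """
    n = int(num_dets[0])                      # valid detections this frame
    keep = scores[0, :n] >= conf_thresh       # confidence filtering
    return boxes[0, :n][keep], scores[0, :n][keep], classes[0, :n][keep]
```

This is only a sketch of the flow, not the exact code I run; the point is that the same preprocessing is applied for FP32, FP16, and INT8, so the INT8 accuracy drop should not come from this step.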