Actually, I have never reproduced 0 mAP for INT8. You can find similar topics, such as TLT YOLOv4 (CSPDakrnet53) - TensorRT INT8 model gives wrong predictions (0 mAP) - Intelligent Video Analytics / TAO Toolkit - NVIDIA Developer Forums and Deepstream infrence gives no detection - Intelligent Video Analytics / TAO Toolkit - NVIDIA Developer Forums. That is why I am asking for your detailed steps and some sample images.
Currently, we find that if an end user sees the log message "iva.common.export.base_exporter: Generating a tensorfile with random tensor images." (as in Deepstream infrence gives no detection - Intelligent Video Analytics / TAO Toolkit - NVIDIA Developer Forums), then something is wrong with the "cal_image_dir", "batch_size", or "batches" arguments.
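As a quick sanity check before exporting, you can verify that the calibration folder actually supplies enough images. The sketch below is my own illustration (not TAO code): it assumes the exporter needs at least batch_size × batches images in cal_image_dir, and the file extensions checked are an assumption.

```python
import os

def count_calibration_images(cal_image_dir, batch_size, batches):
    """Check whether cal_image_dir holds enough images for INT8 calibration.

    Assumption: if the directory supplies fewer than batch_size * batches
    images, the exporter falls back to random tensors, which usually ruins
    INT8 accuracy. Extensions below are an illustrative guess.
    """
    exts = (".jpg", ".jpeg", ".png")
    n_images = sum(1 for f in os.listdir(cal_image_dir)
                   if f.lower().endswith(exts))
    needed = batch_size * batches
    return n_images, needed, n_images >= needed
```

If the third return value is False, reduce "batches" or "batch_size", or add more calibration images, before re-running export.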
I am still looking into your log message "Data file doesn't exist. Pulling input dimensions from the network."
Also, can you run inference successfully with the non-QAT model together with the QAT cal.bin?