YoloV4 int8 conversion issue


I’m trying to convert a YOLOv4 model to INT8.
When I convert the ONNX model to an FP32/FP16 engine I get bit-exact results.
When I try to convert to INT8 I get really bad results.


TensorRT Version:
GPU Type: 3080
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version: 8.1.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag):

This is my calibrator:
calibrator.py (3.8 KB)
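Since the attached calibrator.py isn’t shown inline, here is a minimal sketch of the data-feeding side of an INT8 calibrator. The class name `CalibrationBatcher` and the [0, 1] normalization are assumptions for illustration; in a real calibrator this batcher would back the `get_batch()` method of a `trt.IInt8EntropyCalibrator2` subclass. Note that the preprocessing used for calibration must match the preprocessing used at inference exactly (scaling, channel order, layout) — a mismatch here is a common cause of poor INT8 accuracy.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Normalize a uint8 HWC image to float32 CHW in [0, 1].
    Assumption: the network was trained with this normalization;
    adjust to match your own inference-time preprocessing."""
    chw = image.astype(np.float32) / 255.0   # scale to [0, 1]
    chw = np.transpose(chw, (2, 0, 1))       # HWC -> CHW
    return np.ascontiguousarray(chw)

class CalibrationBatcher:
    """Yields preprocessed calibration batches.
    A trt.IInt8EntropyCalibrator2 subclass would call next_batch()
    from get_batch() and copy the result to device memory (e.g. with
    pycuda's memcpy_htod) before returning the device pointer."""
    def __init__(self, images, batch_size=8):
        self.images = images          # list of uint8 HWC arrays
        self.batch_size = batch_size
        self.index = 0

    def next_batch(self):
        if self.index >= len(self.images):
            return None               # None signals calibration is finished
        batch = self.images[self.index:self.index + self.batch_size]
        self.index += len(batch)
        return np.stack([preprocess(img) for img in batch])
```

Running a few hundred representative images through such a batcher during engine building lets TensorRT collect the activation histograms it needs to choose INT8 scales.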

Hi, please refer to the links below on performing inference in INT8: