Description
I’m migrating my YoloV3 and YoloV4 code from TensorRT 6 to TensorRT 7 and getting errors during INT8 calibration.
Both YoloV3 and YoloV4 run correctly with FP32, but when I run YoloV3 with INT8 I get the warning shown in the image below and the output is wrong.
(yolo-det is a custom layer)
When I run YoloV4 with INT8, I get the error shown in the following image and the program crashes.
I’m using IInt8EntropyCalibrator. Has anything changed in INT8 calibration between TensorRT 6 and TensorRT 7?
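For reference, here is a minimal sketch of how I wire the calibrator into the TensorRT 7 builder config (simplified Python, not my exact code; the batch data, input shape, and cache file name are placeholders):

```python
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context
import tensorrt as trt


class YoloEntropyCalibrator(trt.IInt8EntropyCalibrator):
    """Feeds preprocessed calibration batches to the TensorRT builder."""

    def __init__(self, calib_batches, input_shape, cache_file="calib.cache"):
        super().__init__()
        self.batches = calib_batches          # list of (N, C, H, W) float32 arrays
        self.batch_size = calib_batches[0].shape[0]
        self.cache_file = cache_file
        self.index = 0
        # single device buffer, reused for every calibration batch
        nbytes = int(np.prod((self.batch_size,) + tuple(input_shape))) * 4
        self.device_input = cuda.mem_alloc(nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index >= len(self.batches):
            return None                       # no more data -> calibration finished
        batch = np.ascontiguousarray(self.batches[self.index], dtype=np.float32)
        cuda.memcpy_htod(self.device_input, batch)
        self.index += 1
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)


# Attaching the calibrator in TensorRT 7 via the builder config
# (instead of the TensorRT 6 builder attributes):
# config = builder.create_builder_config()
# config.set_flag(trt.BuilderFlag.INT8)
# config.int8_calibrator = YoloEntropyCalibrator(batches, (3, 608, 608))
```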
Environment
TensorRT Version : 7.1.3
GPU Type : Jetson TX2 iGPU
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04