Migrating INT8 calibration from TensorRT 6 to TensorRT 7 in YoloV3 and YoloV4 failed

Description

I’m migrating my YoloV3 and YoloV4 code from TensorRT 6 to TensorRT 7 and am getting errors during INT8 calibration.

Both YoloV3 and YoloV4 infer correctly in FP32, but when I run YoloV3 in INT8 I get the warning shown in the image below and the output is wrong.


(yolo-det is a custom layer)

When I run YoloV4 in INT8, I get the error and crash shown in the following image.

I’m using IInt8EntropyCalibrator. Did INT8 calibration change between these versions?
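For reference, this is roughly the shape of the calibrator I mean. It is a minimal sketch of the TensorRT 7 Python calibrator interface, not our actual script: the class name, batch data, and cache filename are placeholders, and the base class is stubbed out so the structure can be read on a machine without TensorRT or a GPU.

```python
# Sketch of an INT8 entropy calibrator for the TensorRT 7 Python API.
# On the Jetson, `tensorrt` and `pycuda` are available; here the import is
# guarded so the skeleton can be read (and its cache logic run) standalone.
try:
    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    _Base = trt.IInt8EntropyCalibrator
except ImportError:  # no GPU / no TensorRT: fall back to a plain class
    trt = cuda = None
    _Base = object

import os
import numpy as np


class YoloCalibrator(_Base):
    def __init__(self, batches, cache_file="calib.cache"):
        if _Base is not object:
            super().__init__()
        self.batches = iter(batches)  # each batch: float32 NCHW ndarray
        self.cache_file = cache_file
        self.device_input = None

    def get_batch_size(self):
        # Batch size 1 for simplicity; must match the batches you feed in.
        return 1

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None  # no more data -> calibration ends
        if cuda is None:
            return None  # without CUDA we cannot hand back device pointers
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None  # no cache -> TensorRT recalibrates from data

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

The four methods are the ones TensorRT calls during calibration; `get_batch` must return a list of device pointers whose layout matches the network input binding.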

Environment

TensorRT Version : 7.1.3
GPU Type : Jetson TX2 iGPU
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04

Hi @jackgao0323,
Can you please share the verbose logs along with the model and script.

Thanks!

Hi @AakankshaS,

This is the verbose log of the YoloV4 model running in INT8.

This is the last part of the verbose log of the YoloV3 model running in INT8.




Would it be better if I saved the logs to a file? Sorry, I can’t provide our model or script.

I have solved the problem with running the YoloV3 model in INT8 mode: I was using the wrong input size for calibration. However, I still get the same error with my YoloV4 model. Any update on this question?
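The wrong-size mistake above can be caught before calibration with a simple pre-flight check. This helper is hypothetical (not from the thread), and the 608×608 input shape in the usage below is just an illustrative YOLO size; use whatever shape your network's input binding actually has.

```python
import numpy as np

def check_calibration_shapes(batches, input_shape):
    """Verify every calibration batch matches the network input shape
    (C, H, W). A mismatch here was the cause of the YoloV3 failure."""
    for i, batch in enumerate(batches):
        if tuple(batch.shape[1:]) != tuple(input_shape):
            raise ValueError(
                f"batch {i}: got {tuple(batch.shape[1:])}, "
                f"expected {tuple(input_shape)}"
            )
    return True

# Example: a 608x608 batch passes, a 416x416 batch would raise ValueError.
check_calibration_shapes([np.zeros((1, 3, 608, 608), np.float32)], (3, 608, 608))
```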

Sorry, I’m using a Jetson NX, not a Jetson TX2.