How to generate an INT8 calibration table for trtexec engine generation

Description

I’m porting an ONNX model to a TensorRT engine. I’ve tried onnx2trt and trtexec to generate FP32 and FP16 models. When it comes to INT8, it seems onnx2trt does not support INT8 quantization. When I set the --int8 flag while converting the ONNX model to TensorRT without providing a calibration file, the inference results from the INT8 engine differ a lot from the FP32 ones.

So I’d like to calibrate with some images and then do the INT8 quantization, but I have no idea how to generate the calibration file.

Any sample code or tools are welcome. Thanks.

Environment

Freshly installed JetPack 4.4 DP

Please refer to the sample below:

Thanks

Thank you so much for your reply.

Is there any sample code written in Python?

You can refer to the link below for a Python sample:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-700/tensorrt-sample-support-guide/index.html#int8_caffe_mnist
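In case it helps, here is a minimal Python sketch of an entropy calibrator that produces the calibration cache, following the same pattern as the linked sample. The batch list, input preprocessing, and the calibration.cache file name are assumptions you would adapt to your own model and data.

import os
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context
import tensorrt as trt

class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calibration.cache"):
        # batches: list of preprocessed np.float32 arrays in NCHW layout
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.batch_size = batches[0].shape[0]
        self.device_input = cuda.mem_alloc(batches[0].nbytes)
        self.batches = iter(batches)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None  # no more data: calibration is finished
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # reuse an existing cache so calibration only runs once
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        # this file is the INT8 calibration table asked about above
        with open(self.cache_file, "wb") as f:
            f.write(cache)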

Thanks

Got it, thanks.

Can you share the code to generate the calibration file? Thank you so much.

Hi, please refer to the links below to perform inference in INT8.
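As a rough illustration (not an official sample), the calibrator sketched above can be attached to the builder config so that building the engine runs calibration and writes calibration.cache. The model path, workspace size, and the calibration_batches list (the preprocessed batches from the earlier sketch) are placeholders.

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:       # hypothetical model path
    parser.parse(f.read())

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30       # 1 GiB, adjust as needed
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = EntropyCalibrator(calibration_batches)

engine = builder.build_engine(network, config)

Once calibration.cache has been written, trtexec should be able to reuse it with the --int8 and --calib=calibration.cache options instead of calibrating again.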

Thanks!