INT8 calibration cache is not created

Description

Hi, I’m trying to build an INT8 engine for my CenterNet model.

I was able to build the INT8 engine, but the calibration cache was not created.
I have run the build several times using tensorrt.Builder.build_engine() and tensorrt.Builder.build_cuda_engine(), but the calibration cache was still not created.

I think the write_calibration_cache() method was never called, but I have no idea why this happens.
The API also seems to be a little different between the version I am using and the latest version of TensorRT. Is there an example for TensorRT 7.2.x?
I’d appreciate it a lot if someone could give me some advice.
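
For reference, here is a minimal sketch of what I understand an IInt8EntropyCalibrator2 is supposed to look like in the Python API — the class name, cache path, and the way the batches are fed are placeholders, not my exact code:

```python
import os
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401  (creates the CUDA context)
import tensorrt as trt


class MyEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds calibration batches and reads/writes the calibration cache."""

    def __init__(self, batches, cache_file="calibration.cache"):
        super().__init__()
        self.batches = iter(batches)       # iterable of contiguous float32 numpy arrays
        self.cache_file = cache_file
        self.device_input = None

    def get_batch_size(self):
        return 1                           # must match the batch size of the arrays fed below

    def get_batch(self, names):
        # Assumes a single network input; `names` lists the input bindings.
        try:
            batch = next(self.batches)
        except StopIteration:
            return None                    # signals that calibration data is exhausted
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Returning None forces TensorRT to run calibration, which is what
        # later triggers the write_calibration_cache() call.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```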

And I have another question: INT8 mode and the calibrator can be set on tensorrt.Builder, but they can also be set on tensorrt.IBuilderConfig. The two options seem to correspond to tensorrt.Builder.build_cuda_engine() and tensorrt.Builder.build_engine(), respectively. What is the difference between these two ways of building an engine?
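
To illustrate what I mean, here is a condensed sketch of the two code paths as I understand them. The `populate_network` callback and the `calibrator` instance are placeholders supplied by the caller (e.g. an ONNX-parsing function and an IInt8EntropyCalibrator2 like the one above); this is not my actual build script:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)


def build_int8_engine(populate_network, calibrator, use_legacy_api=False):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    populate_network(network)              # fill in the network definition

    if use_legacy_api:
        # Older path (deprecated since TensorRT 7): INT8 mode and the
        # calibrator are set on the Builder itself, and build_cuda_engine()
        # is called without any IBuilderConfig.
        builder.int8_mode = True
        builder.int8_calibrator = calibrator
        return builder.build_cuda_engine(network)

    # Newer path: INT8 settings live on an IBuilderConfig that is passed
    # explicitly to build_engine().
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator
    return builder.build_engine(network, config)
```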

Environment

TensorRT Version: 7.2.2.1
GPU Type: Nvidia Titan Xp
Nvidia Driver Version:
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: CentOS 7.9.2009
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0
Baremetal or Container (if container which image + tag): container (nvcr.io/nvidia/tensorrt:20.12-py3)

Hi, please refer to the links below to perform inference in INT8.

Thanks!

Thanks, but I have already reviewed the sample you shared.
I am using Python 3 with ONNX. Is there a sample that fits that environment?

Or is there a sample for TensorRT 7.2.x?

Hi,

We have only the following samples. You can use them as a reference and adapt the approach for your ONNX model.
https://github.com/NVIDIA/TensorRT/blob/main/samples/python/int8_caffe_mnist/sample.py#L44
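
As a rough starting point, the build portion of that sample can be adapted to an ONNX model along these lines. This is only a minimal sketch, assuming a single-input network and a calibrator like the one shown earlier in this thread; it is not a tested script:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)


def build_int8_engine_from_onnx(onnx_path, calibrator):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Replace the Caffe parsing step of the sample with the ONNX parser.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30    # 1 GiB, adjust as needed
    config.set_flag(trt.BuilderFlag.INT8)
    # Calibration runs inside build_engine(); write_calibration_cache() is
    # called once calibration finishes, provided read_calibration_cache()
    # returned None.
    config.int8_calibrator = calibrator

    return builder.build_engine(network, config)
```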

Thank you.