Converting .onnx model to int8

I am trying to convert an FP32 ONNX model, used for detecting text in images, to INT8. However, it always fails at the Int8Calibrator part. I am not sure where I went wrong; could you help me find the error? Thanks!

Python Version: 3.8.8

My code looks like this: (2.0 KB)
The basic idea is:
1. Define a function, build_engine, that does most of the work:
- It sets up TensorRT, defining things like batch size, optimization profiles, and whether to use INT8 precision.
- It reads in the ONNX file and parses it into a format that TensorRT can use.
- If the platform supports INT8 precision, it enables INT8 mode on the network and attaches the calibrator.
- It builds the TensorRT engine from the parsed ONNX network and saves it to disk.
2. Define a main function that sets up the calibrator and calls build_engine to start the conversion.

Here is some basic information about my model:
Model inputs:
x: shape [-1, 3, -1, -1], type 1 (FLOAT)
Model outputs:
save_infer_model/scale_0.tmp_1: shape [-1, 1, -1, -1], type 1 (FLOAT)

The error is:
[TRT] [E] 4: [standardEngineBuilder.cpp::initCalibrationParams::1460] Error Code 4: Internal Error (Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.)
[TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Traceback (most recent call last):
File “”, line 61, in
File “”, line 57, in main
engine = build_engine(ONNX_FILE_PATH,Int8_calibrator)
File “”, line 42, in build_engine
TypeError: a bytes-like object is required, not ‘NoneType’

I think the second error is caused by the first: because calibration fails, build_serialized_network returns None, and trying to write that None to disk raises the TypeError at line 42.

In case you need it, I have also attached my model file here:
det.onnx (2.2 MB)
Its weights are stored as FP64, but they are converted to FP32 automatically when this code runs.