Calibration file problem between different versions of TensorRT?

I have a calibration file generated on TensorRT 8.5, and an int8 model built with this file gives good accuracy. When I reuse the same file to build an int8 model on TensorRT 8.2, the model's accuracy drops significantly. What is happening?

Note: I also calibrated the int8 model directly on TensorRT 8.2, but the accuracy was not good. That is why I tried reusing the calibration file from TensorRT 8.5. My device is a Jetson Xavier.

Hi, please refer to the links below to perform inference in INT8.


I have already done calibration and inference. Are you a bot?


The calibration cache data is portable across different devices as long as the calibration happens before layer fusion. Specifically, the calibration cache is portable when using the IInt8EntropyCalibrator2 or IInt8MinMaxCalibrator calibrators, or when QuantizationFlag::kCALIBRATE_BEFORE_FUSION is set. This can simplify the workflow, for example by building the calibration table on a machine with a discrete GPU and then reusing it on an embedded platform. Fusions are not guaranteed to be the same across platforms or devices, so calibrating after layer fusion may not result in a portable calibration cache. The calibration cache is, in general, not portable across TensorRT releases.
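The workflow above (write the cache once, reuse it on another device, and set the before-fusion flag) can be sketched with the TensorRT 8.x C++ API. This is a minimal outline, not a full build pipeline: the class name, cache path, and batch-feeding logic are placeholders, and network/builder creation is omitted.

```cpp
#include "NvInfer.h"
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

using namespace nvinfer1;

// Minimal calibrator sketch: if a cache file already exists, TensorRT reads
// it and skips the expensive calibration pass entirely.
class CacheCalibrator : public IInt8EntropyCalibrator2 {
public:
    explicit CacheCalibrator(const char* cachePath) : mPath(cachePath) {}

    int getBatchSize() const noexcept override { return 1; }

    // Returning false means "no more calibration batches". Real calibration
    // would copy input data to device memory here.
    bool getBatch(void*[], const char*[], int) noexcept override { return false; }

    const void* readCalibrationCache(size_t& length) noexcept override {
        std::ifstream in(mPath, std::ios::binary);
        mCache.assign(std::istreambuf_iterator<char>(in), {});
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* data, size_t length) noexcept override {
        std::ofstream(mPath, std::ios::binary)
            .write(static_cast<const char*>(data), length);
    }

private:
    std::string mPath;
    std::vector<char> mCache;
};

// During engine build (builder and network creation omitted):
void configureInt8(IBuilderConfig* config, CacheCalibrator* calibrator) {
    config->setFlag(BuilderFlag::kINT8);
    // Calibrate before layer fusion so the resulting cache is portable
    // across devices (though still not across TensorRT releases).
    config->setQuantizationFlag(QuantizationFlag::kCALIBRATE_BEFORE_FUSION);
    config->setInt8Calibrator(calibrator);
}
```

Even with kCALIBRATE_BEFORE_FUSION set, the guarantee covers devices, not releases, which matches the accuracy drop observed when reusing an 8.5 cache on 8.2.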

Please refer to the developer guide for more information.

Thank you.

Thank you so much.
