Are int8 calibration cache files platform independent?

Description

If I create an INT8 calibration cache on one machine (x86), can I use it on another machine (ARM, Jetson AGX) to build the TRT engine?

I am creating the calibration cache using Polygraphy:

polygraphy convert model.onnx --int8 --data-loader-script ./data_loader.py --calibration-cache int8_calib.cache -o test.engine
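For reference, `--data-loader-script` expects a Python file that defines a function named `load_data` yielding feed_dicts (input tensor name to NumPy array). A minimal sketch of such a script, where the input name "input", the shape (1, 3, 224, 224), and the batch count are placeholders that must match your model.onnx:

```python
# data_loader.py -- minimal sketch of a Polygraphy data-loader script.
# Polygraphy calls the `load_data` function and iterates over the feed_dicts
# it yields, one per calibration batch. Random data is used here only for
# illustration; real calibration should use representative inputs.
import numpy as np

def load_data():
    for _ in range(8):  # number of calibration batches (placeholder)
        # Keys are the ONNX model's input tensor names (placeholder "input").
        yield {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
```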

Then building the TRT engine:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --int8 --calib=int8_calib.cache

Or will reusing a cache created on a different platform affect performance?

Hi, please refer to the links below on performing inference in INT8.

Thanks!

Hi,

Also, please refer to the developer guide below, which may help you.

The calibration cache data is portable across different devices as long as the calibration happens before layer fusion. Specifically, the calibration cache is portable when using the IInt8EntropyCalibrator2 or IInt8MinMaxCalibrator calibrators, or when QuantizationFlag::kCALIBRATE_BEFORE_FUSION is set. This can simplify the workflow, for example by building the calibration table on a machine with a discrete GPU and then reusing it on an embedded platform. Fusions are not guaranteed to be the same across platforms or devices, so calibrating after layer fusion may not result in a portable calibration cache. The calibration cache is in general not portable across TensorRT releases.
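As a quick sanity check after copying the cache between machines, it helps to know that the calibration cache is a small text file: a header line identifying the TensorRT build and calibrator, followed by one `tensorName: hexScale` line per tensor, where the scale is the hex-encoded bit pattern of an IEEE-754 float32. A minimal reader sketch under that format assumption (the sample cache contents below are made up for illustration):

```python
# Sketch: decode the per-tensor scales stored in an INT8 calibration cache.
# Format assumption: a header line (e.g. "TRT-8502-EntropyCalibration2")
# followed by "tensorName: <hex-encoded float32 scale>" lines.
import struct

def read_calib_cache(text):
    lines = text.strip().splitlines()
    header, scales = lines[0], {}
    for line in lines[1:]:
        name, hexval = line.rsplit(":", 1)
        # The hex string is the big-endian IEEE-754 bit pattern of the scale.
        scales[name.strip()] = struct.unpack(">f", bytes.fromhex(hexval.strip()))[0]
    return header, scales

# Hypothetical cache contents for illustration (0x3f800000 encodes 1.0):
sample = "TRT-8502-EntropyCalibration2\ninput_tensor: 3f800000\n"
header, scales = read_calib_cache(sample)
```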

Thank you.

Thank you for the links; they are very helpful. Is Polygraphy supported on a Jetson AGX with JetPack 5.1.2?

It seems I was able to run it after all; never mind.
