INT8 calibration cache for trtexec fails

Description

Hi everyone,

I’m trying to run INT8 quantization of my model using trtexec, but I’m encountering errors during the calibration step. Below are the steps I followed and the corresponding error logs.

Steps performed

  1. Calibration cache creation using Polygraphy:
polygraphy convert model.onnx \
    --int8 \
    --data-loader-script ./data_loader.py \
    --calibration-cache int8_calib.cache \
    -o model.engine
  2. Engine generation using trtexec:
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --int8 \
        --calib=int8_calib.cache
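
For context, the `data_loader.py` passed via `--data-loader-script` defines a `load_data()` function that yields one feed dict per calibration batch. Mine looks roughly like this sketch (the input name and shape here are placeholders, not my actual model's):

```python
# data_loader.py - sketch of the loader passed to --data-loader-script.
# Polygraphy looks for a function named load_data() and iterates over the
# feed dicts it yields, one dict per calibration batch.
import numpy as np

INPUT_NAME = "input"      # placeholder: the model's actual input tensor name
SHAPE = (1, 3, 224, 224)  # placeholder: the model's actual input shape

def load_data():
    for _ in range(8):  # number of calibration batches
        # Real, representative samples should replace the random data here.
        yield {INPUT_NAME: np.random.rand(*SHAPE).astype(np.float32)}
```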

Observed error

[11/02/2025-17:17:34] [I] [TRT] Calibration table does not match calibrator algorithm type.
[11/02/2025-17:17:38] [I] [TRT] Starting Calibration.
[11/02/2025-17:17:38] [E] Error[2]: [calibrator.cu::absTensorMax::141] Error Code 2: Internal Error (Assertion memory != nullptr failed. memory must be valid if nbElem != 0)
[11/02/2025-17:17:38] [E] Error[1]: [executionContext.cpp::executeInternal::1198] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
...
[11/02/2025-17:17:38] [E] Error[2]: [calibrator.cpp::calibrateEngine::1181] Error Code 2: Internal Error (Assertion context->executeV2(&bindings[0]) failed.)
[11/02/2025-17:17:38] [E] Engine could not be created from network
[11/02/2025-17:17:38] [E] Cuda failure: an illegal memory access was encountered
Aborted (core dumped)

Environment

TensorRT Version: 8.6.2.3
GPU Type: Jetson Orin NX 16 GB
CUDA Version: 12.2.140
CUDNN Version: 8.9.4.25
Operating System + Version: Ubuntu 22.04 (Jammy Jellyfish), kernel 5.15.136-tegra
Python Version (if applicable): 3.10


Questions

  • How can I solve this issue?
  • Is there another way of generating the calibration cache?

Any insights or suggestions would be greatly appreciated.

Thanks in advance!

Hi @vic01 ,
I tried reproducing this issue with a similar workflow and it completed successfully without errors. However, my environment is different from yours.

Key observations:

  1. TensorRT version mismatch - You’re using TensorRT 8.6.2.3 while the latest is 10.x. This is likely the main issue. The calibration cache format and INT8 calibration algorithms have evolved significantly between these versions.
  2. Calibration table mismatch error - The error Calibration table does not match calibrator algorithm type suggests that:
  • Polygraphy (which likely uses a newer calibration algorithm) generated a cache that trtexec 8.6.2.3 can’t properly read

  • There might be an algorithm mismatch between the cache generation and consumption.
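
One quick way to narrow this down: the first line of a TensorRT calibration cache records the TensorRT version and calibrator algorithm it was generated with (e.g. `TRT-8601-EntropyCalibration2`), which is what the "does not match calibrator algorithm type" check is comparing. You can inspect it with a few lines of Python (file name taken from your commands above):

```python
# Inspect the calibration cache header. The first line encodes the TensorRT
# version and calibrator algorithm used to generate it, e.g.
# "TRT-8601-EntropyCalibration2"; if the consuming side uses a different
# calibrator algorithm, TensorRT reports the mismatch you are seeing.
def cache_header(path="int8_calib.cache"):
    with open(path, "rb") as f:
        return f.readline().decode(errors="replace").strip()
```

For example, `print(cache_header())` will show which algorithm Polygraphy actually used when it wrote the cache.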

Suggested solutions:

  1. Upgrade TensorRT - If possible, upgrade to TensorRT 8.6.10+ or ideally 10.x on your Jetson Orin. JetPack 6.x should support newer TensorRT versions.

  2. Use consistent tools - Try using only trtexec for both calibration and engine building instead of mixing Polygraphy and trtexec:

    # Create calibration cache with trtexec directly
    trtexec --onnx=model.onnx \
        --int8 \
        --saveEngine=model.engine \
        --calib=calibrator_script.py
    
  3. Check Polygraphy version compatibility - Ensure your Polygraphy version matches your TensorRT version.

  4. Memory access error - The illegal memory access error might indicate insufficient memory or a bug in TensorRT 8.6.2.3 that was fixed in later versions.

Since this appears to be Jetson/ARM-specific and related to an older TensorRT version, I’d recommend opening a case on the Jetson Forums with:

  • Your complete error logs

  • The ONNX model (if shareable) or at least model architecture details

  • Your calibration data loader script

  • The generated calibration cache file

The Jetson team will have more insight into platform-specific issues and can verify if this is a known bug in TensorRT 8.6.2.3 for Jetson.

Thank You.


Hi @athkumar, thanks for your reply.

Regarding the point about Polygraphy version compatibility:

When I install Polygraphy and try to run it, it asks for TensorRT to be installed. For that reason, I created a virtual environment that includes TensorRT 8.6.2.3:

python -m virtualenv -p python quant --system-site-packages
source quant/bin/activate
python -m pip install colored polygraphy --extra-index-url https://pypi.ngc.nvidia.com

So, in theory, there shouldn’t be any compatibility issues if Polygraphy uses the TensorRT installation available in the same environment.
Or is that assumption incorrect?
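
To check which installations the environment actually resolves, something like this can be run from inside the activated virtualenv:

```python
# Report where each relevant module would be imported from inside this
# environment (or "not found" if it is missing from the environment).
import importlib.util

for module in ("tensorrt", "polygraphy"):
    spec = importlib.util.find_spec(module)
    print(module, "->", spec.origin if spec else "not found")
```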


Regarding the suggestion to create the calibration cache with trtexec directly:

How should the calibration script calibrator_script.py be structured or implemented?

Thanks again for your help!


So, in theory, there shouldn’t be any compatibility issues if Polygraphy uses the TensorRT installation available in the same environment.
Or is that assumption incorrect?

Even though Polygraphy uses your installed TensorRT 8.6 at runtime, when you run pip install polygraphy, you’re getting the latest version of Polygraphy (likely 0.49.x), which is designed for newer TensorRT versions.

Version compatibility matrix:

  • TensorRT 8.6.x → Polygraphy 0.47.x
  • TensorRT 10.x → Polygraphy 0.49.x

Reference: Release TensorRT OSS v10.6.0 · NVIDIA/TensorRT · GitHub

You can try:

# Check your current Polygraphy version

pip show polygraphy

# Downgrade to match your TensorRT version (or upgrade both)

pip install polygraphy==0.47.1

How should the calibration script calibrator_script.py be structured or implemented?

The issue is most likely a version mismatch. You can continue using your current Polygraphy + trtexec approach; just make sure the versions are compatible, and let me know how it goes. If the problem persists, please post on the Jetson Forum with your complete logs, as this appears to be Jetson-specific rather than a general TensorRT issue.

Thank You.


Hi @athkumar,

The issue was indeed a version mismatch. However, the version you mentioned does not exist:

pip install polygraphy==0.47.1

I tried instead:

pip install polygraphy==0.48.1

and it worked.

Thanks a lot for the help!

Moreover, could you please provide some guidance or reference on how to implement calibrator_script.py? I’m interested in using trtexec exclusively, without mixing multiple technologies.

Thanks in advance!
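
For reference while waiting on an answer: as far as I can tell, trtexec's `--calib` flag reads a calibration cache file rather than executing a Python script, so the usual trtexec-centric workflow is to generate the cache once with a small TensorRT Python script and then pass the cache to trtexec. A rough sketch of such a script, with placeholder names and shapes (`pycuda` is assumed for the device buffer):

```python
# Sketch: generate int8_calib.cache with the TensorRT Python API, then build
# the engine with: trtexec --onnx=model.onnx --int8 --calib=int8_calib.cache
# Note: trtexec's --calib flag reads a cache file; it does not run Python.
import os
import numpy as np

def make_batches(num_batches=8, shape=(1, 3, 224, 224)):
    """Placeholder calibration data; replace with real preprocessed samples."""
    return [np.random.rand(*shape).astype(np.float32) for _ in range(num_batches)]

try:
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 - creates a CUDA context
    import pycuda.driver as cuda

    class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
        def __init__(self, batches, cache_path="int8_calib.cache"):
            super().__init__()
            self.batches = iter(batches)
            self.cache_path = cache_path
            self.device_input = cuda.mem_alloc(batches[0].nbytes)

        def get_batch_size(self):
            return 1  # must match the batch dimension of the data

        def get_batch(self, names):
            try:
                batch = next(self.batches)
            except StopIteration:
                return None  # no more data: calibration finishes
            cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
            return [int(self.device_input)]

        def read_calibration_cache(self):
            if os.path.exists(self.cache_path):
                with open(self.cache_path, "rb") as f:
                    return f.read()

        def write_calibration_cache(self, cache):
            with open(self.cache_path, "wb") as f:
                f.write(cache)
except ImportError:
    trt = None  # TensorRT/pycuda are only available on the target device
```

The calibrator would be attached via the builder config's `int8_calibrator` when building once in Python; after `int8_calib.cache` exists, trtexec can consume it directly with `--calib=int8_calib.cache`.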