Internal Error (Assertion min_ <= max_ failed.) when calibrating a model with IInt8MinMaxCalibrator in C++ on TensorRT 8.6.11.4

Software Version
DRIVE OS 6.0.8.1

  • Software Version
    TensorRT 8.6.11.4
    CUDA 11.4
  • Target OS
    Linux
  • SDK Manager Version
    1.9.2.10884
  • Host Machine Version
    NVIDIA Orin platform, Linux 20.04 host installed with DRIVE OS Docker containers

Describe the bug
When converting an ONNX model to TensorRT with INT8 calibration, we observe the error
nvinfer: 2: [quantizationBase.cpp::dynamicRange::26] Error Code 2: Internal Error (Assertion min_ <= max_ failed.)

Here are the model details.
Model properties
format: ONNX v8
producer: PyTorch 1.13.0
version: 0
imports: ai.onnx v17
graph: torch_jit

Input tensors
image: tensor: float32[1,7,3,1088,1920]
pre_bev: tensor: float32[10000,1,256]
use_prev_bev: tensor: float32[1]
can_bus: tensor: float32[18]
bev_pos: tensor: float32[1,256,100,100]
ref_2d: tensor: float32[1,10000,1,2]
reference_points_cam: tensor: float32[8,2700,1,8]
bev_mask: tensor: float32[1,10000]
query_bev_mask: tensor: int64[8,2700]
Unfortunately, I cannot share the model right now, so I would like to extract a minimal example to reproduce this issue. Any clue on how to identify which layer this error is coming from would be very helpful.

Dear @wangriying,
Could you please share the model so we can reproduce the issue?
Meanwhile, can you share the full log? Are you using the trtexec tool or the TRT APIs for the INT8 conversion? Please check whether the issue can be reproduced with the trtexec tool.

I have fixed this issue.

Dear @wangriying,
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Glad to hear that. May I know what the mistake was and how you fixed it? Please share, as it may help others in the developer community.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.