Software Version
DRIVE OS 6.0.8.1
TensorRT 8.6.11.4
CUDA 11.4

Target OS
Linux

SDK Manager Version
1.9.2.10884

Host Machine Version
NVIDIA Orin platform, Ubuntu 20.04 host installed with DRIVE OS Docker containers
Describe the bug
When converting an ONNX model to TensorRT with INT8 calibration, we observe the following error:

nvinfer: 2: [quantizationBase.cpp::dynamicRange::26] Error Code 2: Internal Error (Assertion min_ <= max_ failed.)
Here are the model details:
Model properties
- format: ONNX v8
- producer: PyTorch 1.13.0
- version: 0
- imports: ai.onnx v17
- graph: torch_jit

Input tensors
- image: float32[1,7,3,1088,1920]
- pre_bev: float32[10000,1,256]
- use_prev_bev: float32[1]
- can_bus: float32[18]
- bev_pos: float32[1,256,100,100]
- ref_2d: float32[1,10000,1,2]
- reference_points_cam: float32[8,2700,1,8]
- bev_mask: float32[1,10000]
- query_bev_mask: int64[8,2700]
Unfortunately, I cannot share the model right now, but I would like to extract a minimal example that reproduces this issue. Any clue on how to identify which layer this error is coming from would be very helpful.
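One thing I have been considering, in case it helps narrow this down: since the assertion is about a dynamic range with min_ > max_, the calibration cache itself might point at the offending tensor. This is only a sketch under the assumption that the calibrator writes the usual text cache (a header line followed by `tensor_name: hex_scale` entries, where each scale is a big-endian hex-encoded float32); the file name `calib.cache` and the sample entries below are placeholders, not from my actual run.

```python
import math
import struct

def parse_calibration_cache(text):
    """Parse a TensorRT-style calibration cache into {tensor_name: scale}.

    Assumes the format: one header line (e.g. 'TRT-...-EntropyCalibration2'),
    then one 'tensor_name: hex_scale' line per tensor, with the scale stored
    as a big-endian hex-encoded float32.
    """
    scales = {}
    for line in text.splitlines()[1:]:  # skip the header line
        if ":" not in line:
            continue
        name, _, hex_scale = line.rpartition(":")
        try:
            (scale,) = struct.unpack("!f", bytes.fromhex(hex_scale.strip()))
        except (ValueError, struct.error):
            continue  # ignore lines that are not hex-encoded scales
        scales[name.strip()] = scale
    return scales

def suspicious_scales(scales):
    """Flag tensors whose scale is NaN, infinite, or non-positive --
    any of these would yield an inverted [min_, max_] dynamic range."""
    return {n: s for n, s in scales.items()
            if not math.isfinite(s) or s <= 0.0}

if __name__ == "__main__":
    # Synthetic cache text for illustration; a real run would read calib.cache.
    cache = "\n".join([
        "TRT-8611-EntropyCalibration2",
        "image: 3c010a14",           # ~0.0079, looks healthy
        "bev_mask: 7fc00000",        # NaN scale -> suspect
        "query_bev_mask: 00000000",  # zero scale -> suspect
    ])
    print(sorted(suspicious_scales(parse_calibration_cache(cache))))
```

If the cache parses cleanly and one of the flagged names maps to a tensor in the ONNX graph, that would at least localize which layer's activation statistics are degenerate (e.g. a tensor that is constant or never exercised by the calibration data).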