"Error Code 3: API Usage Error (Parameter check failed, condition: isQuantized(dataType)."

Description

I used TensorRT ModelOpt to quantize an ONNX model and am trying to run it on a Jetson Orin Nano Super using ONNX Runtime with TensorRT as the execution provider, but I encounter errors of this type for biases: IDequantizeLayer::setPrecision: Error Code 3: API Usage Error (Parameter check failed, condition: isQuantized(dataType). A DequantizeLayer can only run in DataType::kINT8, DataType::kFP8 or DataType::kINT4precision).

I have tried both dq_only true and false; both settings produce the same errors.

What can be done to fix these errors? From what I found, biases usually aren't quantized, yet the error suggests TensorRT is hitting DequantizeLayer nodes on the biases whose input data type is not a quantized one.
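As a sanity check on my diagnosis, the logic would be to scan the graph for DequantizeLinear nodes whose data input is a non-quantized initializer. Here is a toy sketch of that check using a simplified dict-based graph representation rather than the real `onnx` API (all node and tensor names below are illustrative, not from my model):

```python
# Toy sketch: flag DequantizeLinear nodes whose input tensor is NOT a
# quantized type -- the situation TensorRT rejects with the
# "isQuantized(dataType)" parameter check. Uses a simplified dict-based
# graph, not the real `onnx` package (names are illustrative).

# Data types a TensorRT DequantizeLayer accepts, per the error message.
QUANTIZED_TYPES = {"int8", "uint8", "int4", "fp8"}

def find_bad_dequantize(nodes, initializer_dtypes):
    """Return names of DequantizeLinear nodes fed by a non-quantized initializer."""
    bad = []
    for node in nodes:
        if node["op_type"] != "DequantizeLinear":
            continue
        data_input = node["inputs"][0]  # first input is the tensor being dequantized
        dtype = initializer_dtypes.get(data_input)
        if dtype is not None and dtype not in QUANTIZED_TYPES:
            bad.append(node["name"])
    return bad

# Example: an int8 weight dequantize (fine) and a bias dequantize fed by
# a float32 initializer (the failure case described above).
nodes = [
    {"name": "dq_weight", "op_type": "DequantizeLinear", "inputs": ["w_q", "w_scale"]},
    {"name": "dq_bias", "op_type": "DequantizeLinear", "inputs": ["b", "b_scale"]},
    {"name": "conv", "op_type": "Conv", "inputs": ["x", "w_dq", "b_dq"]},
]
initializer_dtypes = {"w_q": "int8", "b": "float32"}

print(find_bad_dequantize(nodes, initializer_dtypes))  # -> ['dq_bias']
```

The same idea could be applied to the real model by iterating over `model.graph.node` and `model.graph.initializer` with the `onnx` Python package, to confirm whether the bias DequantizeLinear nodes are indeed fed by float initializers.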

Environment

dustynv/onnxruntime:1.22-r36.4.0-cu128-24.04 Docker container image on the Jetson.

Relevant Files

I have attached the full log file produced during engine build and inference.

optimisation_log_qdq_qd_only_optim_level_3_modelopt_1.log (394.4 KB)