Internal Error while Creating TensorRT Engine from Quantized ONNX Model

I encountered an issue when attempting to create a TensorRT engine from a quantized ONNX model on the NVIDIA Orin. The exact error log follows:

[03/24/2025-13:13:31] [E] Error[10]: IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node stages.1.downsample.proj.0.reparam_conv.weight + /stages.1/downsample/proj/proj.0/reparam_conv/_weight_quantizer/QuantizeLinear + /stages.1/downsample/proj/proj.0/reparam_conv/Conv.)
[03/24/2025-13:13:31] [E] Engine could not be created from network
[03/24/2025-13:13:31] [E] Building engine failed
[03/24/2025-13:13:31] [E] Failed to create engine from model or file.
[03/24/2025-13:13:31] [E] Engine set up failed

I’ve asked about a similar error previously, but in this case adjusting builderOptimizationLevel does not resolve the problem (I tested levels 3, 4, and 5).

Steps to Reproduce

fastvit_ma36.silu.qat.zip (76.2 MB)
fastvit_ma36.silu.qat01.zip (80 MB)

  1. rename fastvit_ma36.silu.qat01.txt to fastvit_ma36.silu.qat.z01

  2. unzip fastvit_ma36.silu.qat.zip

  3. execute the following command:

/usr/src/tensorrt/bin/trtexec --verbose --onnx=fastvit_ma36.silu.qat.onnx --saveEngine=test.engine --builderOptimizationLevel=4 --fp16 --int8

Environment

Platform : Orin
Jetpack Version : 6.2+b77
TensorRT Version : 10.7
CUDA Version : 12.6.85
CUDNN Version : 9.3
Operating System + Version : Ubuntu 22.04 Jammy Jellyfish
Baremetal or Container (if container which image + tag): baremetal

Hi,
Here are some suggestions for common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

2. Installation

Installation guides for deep learning frameworks on Jetson:

3. Tutorial

Getting-started deep learning tutorial:

4. Report issue

If these suggestions don’t help and you want to report an issue to us, please share the model, the command/steps, and any customized app with us so we can reproduce it locally.

Thanks!

Hi,

We want to test it locally but somehow are not able to unzip the file.
Would you mind sharing it through an online drive so you don’t need to split it into two files?

Thanks.

Sure, here’s the model I used.

Thanks.

Hi,

Thanks for sharing the model.
We can reproduce the same issue in our environment as well.

We need to check this issue with our internal team.
We will provide more info to you later.

Thanks.