ONNX to TRT conversion failed

Hello, I’m trying to deploy an ONNX model on my Windows PC, but the conversion to a TensorRT engine fails as follows:

trtexec.exe --onnx=.\fangheng0424_super2_wavelet_gt_avgpool_epoch_100.onnx --saveEngine=out.trt --iterations=1 --device=0 --verbose
...
[04/29/2024-20:57:54] [E] [TRT] ModelImporter.cpp:828: While parsing node number 123 [AveragePool -> "input.100"]:
[04/29/2024-20:57:54] [E] [TRT] ModelImporter.cpp:831: --- Begin node ---
input: "onnx::AveragePool_282"
output: "input.100"
op_type: "AveragePool"
attribute {
  name: "ceil_mode"
  i: 0
  type: INT
}
attribute {
  name: "kernel_shape"
  ints: 512
  ints: 512
  type: INTS
}
attribute {
  name: "pads"
  ints: 0
  ints: 0
  ints: 0
  type: INTS
}
attribute {
  name: "strides"
  ints: 512
  ints: 512
  type: INTS
}

[04/29/2024-20:57:54] [E] [TRT] ModelImporter.cpp:832: --- End node ---
[04/29/2024-20:57:54] [E] [TRT] ModelImporter.cpp:836: ERROR: ModelImporter.cpp:176 In function parseNode:
[1] Exception occurred in - AveragePool_123
Internal Error!
[04/29/2024-20:57:54] [E] Failed to parse onnx file
[04/29/2024-20:57:54] [I] Finished parsing network model. Parse time: 1.37159
[04/29/2024-20:57:54] [E] Parsing model failed
[04/29/2024-20:57:54] [E] Failed to create engine from model or file.
[04/29/2024-20:57:54] [E] Engine set up failed

Environment

TensorRT Version: 10.0.1.6
GPU Type: RTX 3060 Laptop GPU (6 GB)
Nvidia Driver Version: 531.68
CUDA Version: 11.7
CUDNN Version:
Operating System + Version: Windows 10

Relevant Files

model file:

full verbose log:
log.txt (387.6 KB)

Steps To Reproduce

trtexec.exe --onnx=.\fangheng0424_super2_wavelet_gt_avgpool_epoch_100.onnx --saveEngine=out.trt --iterations=1 --device=0 --verbose
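
For reference, the same parse step can also be reproduced from the TensorRT Python API, which prints each parser error together with the index of the failing node. This is only a minimal sketch, assuming the tensorrt Python package from the same 10.0.1.6 install is available:

import tensorrt as trt

# Sketch: run the ONNX parse step via the TensorRT Python API and
# print every parser error with the index of the node it refers to.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network()
parser = trt.OnnxParser(network, logger)

with open("fangheng0424_super2_wavelet_gt_avgpool_epoch_100.onnx", "rb") as f:
    ok = parser.parse(f.read())

if not ok:
    for i in range(parser.num_errors):
        err = parser.get_error(i)
        print("node", err.node(), "-", err.desc())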

Check I did

import onnx

# Load the exported model and run the ONNX structural checker on it
filename = "fangheng0424_super2_wavelet_gt_avgpool_epoch_100.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)

The above snippet runs without error.
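
Note that onnx.checker only verifies that the graph conforms to the ONNX spec; it knows nothing about TensorRT-specific limits, so it can pass while trtexec still rejects the model, as happened here. A slightly stronger check is possible (a sketch, again assuming only the onnx package):

import onnx

model = onnx.load("fangheng0424_super2_wavelet_gt_avgpool_epoch_100.onnx")
# full_check=True additionally runs shape inference over the whole graph
onnx.checker.check_model(model, full_check=True)
print("checker passed")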

Checking on this.

Looking forward to your reply, thanks.

Found the reason: the pooling kernel size is wrongly set to 512x512, which is too large.
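
For anyone who hits the same error, here is a minimal sketch (assuming only the onnx package and the model file above) that prints the window and stride of every AveragePool node in the graph, which is how the 512x512 kernel shows up:

import onnx

model = onnx.load("fangheng0424_super2_wavelet_gt_avgpool_epoch_100.onnx")

# List kernel_shape and strides for every AveragePool node in the graph
for i, node in enumerate(model.graph.node):
    if node.op_type == "AveragePool":
        attrs = {a.name: list(a.ints) for a in node.attribute if a.ints}
        print(i, node.output[0],
              "kernel_shape =", attrs.get("kernel_shape"),
              "strides =", attrs.get("strides"))

If the intent of that layer is to average over the entire feature map, exporting it as a GlobalAveragePool (for example via nn.AdaptiveAvgPool2d(1), assuming the model was exported from PyTorch) avoids the oversized fixed kernel.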