Myelin error when loading ONNX model: Assertion `false && "Invalid size written"' failed

Description

We’re preparing to use Orin with TensorRT 8.3 for better performance, but engine creation fails when creating a Builder. The error occurs in Myelin, which is closed-source, so we’ve been stuck for days with no clue where to start debugging. Asking for your help :)

Environment

**TensorRT Version**: 8.3
**GPU Type**: Orin
**Nvidia Driver Version**: 515
**CUDA Version**: 11.4
**CUDNN Version**: 8.2

Relevant Files

Steps To Reproduce

trtexec --onnx=swin_tiny_patch4_window7_224_opt.onnx --workspace=4096 --int8 --verbose --saveEngine=swin_tiny_patch4_window7_224_opt.trt
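
For reference, a rough Python-API equivalent of the trtexec command above is sketched in build_engine.py below. The model path, the INT8 flag, and the 4096 MiB workspace are taken from the command line; everything else (explicit-batch network, building without an INT8 calibrator) is an assumption for illustration, not the exact code path trtexec takes.

build_engine.py

import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)

# ONNX models require an explicit-batch network definition
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("swin_tiny_patch4_window7_224_opt.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 4096 * (1 << 20)  # mirrors --workspace=4096 (MiB)
config.set_flag(trt.BuilderFlag.INT8)  # a real INT8 deployment would also supply a calibrator or explicit dynamic ranges

# Build and serialize the engine
serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("Engine build failed")
with open("swin_tiny_patch4_window7_224_opt.trt", "wb") as f:
    f.write(bytes(serialized))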

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below (a usage example follows this list).

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
  2. Try running your model with the trtexec command.
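
The check_model.py snippet above takes the model path as a command-line argument, so for the model in this thread it would be run as, for example: python check_model.py swin_tiny_patch4_window7_224_opt.onnx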

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hello, the ONNX model download link is:

The trtexec run log is:
swin-t-error.log - Google Drive

Hi,

We could successfully build the TensorRT engine on the latest TensorRT version, 8.4 GA. Please upgrade your TensorRT version.

&&&& PASSED TensorRT.trtexec [TensorRT v8401] # trtexec --onnx=swin_tiny_patch4_window7_224_opt.onnx --int8 --verbose --workspace=4096
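
To double-check which TensorRT build the Python environment picks up after upgrading, a quick check (assuming the tensorrt Python bindings are installed) is:

import tensorrt as trt
print(trt.__version__)  # should report 8.4.x after upgrading to TensorRT 8.4 GA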

Thank you.
