TensorRT conversion from ONNX for keras-ocr models fails because of INT32 input in intermediate layers

Description

I am using Polygraphy to convert my ONNX model to TensorRT, but the conversion fails with:

[E] In node 95 (notInvalidType): UNSUPPORTED_NODE: Found invalid input type of INT32

The offending tensor is an input to a MatMul layer in the model.
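A minimal sketch of how the offending input type can be located (assuming the recognizer.onnx file shared below, and that ONNX shape inference can fill in the intermediate tensor types):

import onnx

# Run shape inference so intermediate tensors carry type information.
model = onnx.shape_inference.infer_shapes(onnx.load("recognizer.onnx"))
graph = model.graph

# Collect the element type of every tensor we know about
# (onnx.TensorProto.FLOAT == 1, INT32 == 6, INT64 == 7).
elem_types = {init.name: init.data_type for init in graph.initializer}
for vi in list(graph.value_info) + list(graph.input) + list(graph.output):
    elem_types[vi.name] = vi.type.tensor_type.elem_type

# Print every MatMul together with the types of its inputs.
for i, node in enumerate(graph.node):
    if node.op_type == "MatMul":
        print(i, node.name, [(n, elem_types.get(n)) for n in node.input])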

Environment

TensorRT Version: 8.5.3.1
GPU Type: T4
Nvidia Driver Version: 510
CUDA Version: 11.6
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.9
TensorFlow Version (if applicable): 2.7
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

recognizer.onnx (33.6 MB)

Steps To Reproduce

run in terminal

polygraphy convert recognizer.onnx --save-tactics replay.json -o 0.engine

Error trace:

[W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[W] onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[W] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[E] ModelImporter.cpp:726: While parsing node number 95 [MatMul -> "model_2/lambda_1/MatMul_1:0"]:
[E] ModelImporter.cpp:727: --- Begin node ---
[E] ModelImporter.cpp:728: input: "model_2/lambda_1/Reshape_8:0"
    input: "model_2/lambda_1/ones:0"
    output: "model_2/lambda_1/MatMul_1:0"
    name: "model_2/lambda_1/MatMul_1"
    op_type: "MatMul"
[E] ModelImporter.cpp:729: --- End node ---
[E] ModelImporter.cpp:732: ERROR: onnx2trt_utils.cpp:23 In function notInvalidType:
    [8] Found invalid input type of INT32
[E] In node 95 (notInvalidType): UNSUPPORTED_NODE: Found invalid input type of INT32
[!] Could not parse ONNX correctly
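For reference, a common workaround for this class of error is to insert a Cast to FLOAT in front of the INT32 operand so that TensorRT's MatMul only sees floating-point inputs. A minimal onnx-graphsurgeon sketch (a generic pass over all MatMul nodes, not verified against this particular model; the output filename is illustrative):

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("recognizer.onnx"))

# One Cast per tensor, reused if the same tensor feeds several MatMuls.
casts = {}

for node in list(graph.nodes):
    if node.op != "MatMul":
        continue
    for i, inp in enumerate(node.inputs):
        if inp.dtype != np.int32:
            continue
        if inp.name not in casts:
            out = gs.Variable(inp.name + "_fp32", dtype=np.float32)
            graph.nodes.append(gs.Node(op="Cast", inputs=[inp], outputs=[out],
                                       attrs={"to": onnx.TensorProto.FLOAT}))
            casts[inp.name] = out
        node.inputs[i] = casts[inp.name]

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "recognizer_cast.onnx")

The polygraphy command above can then be re-run against recognizer_cast.onnx.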

Hi,
Please share the ONNX model and the conversion script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys

import onnx

# Load the model from the path given on the command line and run the
# ONNX checker, which validates the graph structure and operator types.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
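Run it against the model, for example:

python check_model.py recognizer.onnx

If the checker raises no exception, the graph itself is structurally valid.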
  2. Run your model with the trtexec command; an example is shown below.
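For example (using the model file shared above; trtexec typically ships with TensorRT, e.g. under /usr/src/tensorrt/bin in the NGC containers):

trtexec --onnx=recognizer.onnx --verbose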

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!