ONNX to engine conversion fails: ConvLSTM2D

Description

I am developing an AI model that needs TensorFlow ConvLSTM2D layers. ONNX inference works fine, but the conversion to a .engine file fails.
The issue is the following:

ERROR: [TRT]: 2: [makeReshapeExplicit.cpp::expandConvolution::131] Error Code 2: Internal Error (Myelin support for convolution with 2 inputs will be added by TRT-12816.)

I have two questions. First, is this layer supported in newer TensorRT versions (or at least planned soon)? Second, if not, what would be an equivalent layer that can be converted to TensorRT? (See the sketch below for the kind of replacement I have in mind.)

I am performing the conversion on a Jetson Orin, but I don't think that should be relevant.
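
For reference, the replacement I have in mind is something like the sketch below: a Conv3D front end keeps every convolution's weights static (no recurrent state, so no two-input convolutions). This is only a hypothetical, untested substitution, not a functional equivalent of ConvLSTM2D, and any accuracy impact would need to be re-validated.

import tensorflow as tf

# Hypothetical TensorRT-friendly variant: Conv3D mixes time and space
# with constant kernel weights. Not a drop-in equivalent of ConvLSTM2D;
# accuracy would need to be re-checked.
alt_model = tf.keras.models.Sequential([
    tf.keras.layers.Conv3D(filters=16,
                           kernel_size=(3, 3, 3),  # (time, height, width)
                           padding='same',
                           activation='relu',
                           input_shape=(None, 64, 64, 1)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])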

Environment

TensorRT Version: 8.5.5.2
GPU Type: Jetson AGX Orin
Nvidia Driver Version:
CUDA Arch BIN: 8.7
Jetpack: 5.1.2
Operating System + Version: Ubuntu 20.04.5 LTS
Python Version (if applicable): 3.10.16
TensorFlow Version (if applicable): 2.17
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Steps To Reproduce

import tensorflow as tf
import tf2onnx
import onnx

# Minimal model that reproduces the failure: one ConvLSTM2D layer
# followed by a small classification head.
model = tf.keras.models.Sequential([
    tf.keras.layers.ConvLSTM2D(filters=16,
                               kernel_size=(3, 3),
                               input_shape=(None, 64, 64, 1),  # (time, height, width, channels)
                               padding='same',
                               return_sequences=True,
                               activation='relu'),
    tf.keras.layers.BatchNormalization(),

    # Reduces (time, height, width, channels) -> (features)
    tf.keras.layers.GlobalAveragePooling3D(),

    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')  # Example for 10-class classification
])

# Wrap the Sequential model in a functional Model with an explicit input.
input_tensor = tf.keras.Input(shape=(None, 64, 64, 1))
output_tensor = model(input_tensor)
model = tf.keras.Model(inputs=input_tensor, outputs=output_tensor)

# Convert the model to ONNX format

spec = (tf.TensorSpec((None, None, 64, 64, 1), tf.float32, name="input"),)
output_path = "ExampleModel.onnx"
model_proto, _ = tf2onnx.convert.from_keras(model, input_signature=spec, output_path=output_path, opset=13)
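
(For context, the statement above that ONNX inference works is based on a check of this kind; a minimal sketch using onnxruntime with the CPU provider, reading the input name from the session rather than assuming it:)

import numpy as np
import onnxruntime as ort

# Sanity-check that the exported ONNX model runs before building an engine.
sess = ort.InferenceSession(output_path, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 8, 64, 64, 1).astype(np.float32)  # (batch, time, H, W, C)
outputs = sess.run(None, {input_name: dummy})
print(outputs[0].shape)  # expected: (1, 10)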

!trtexec --onnx=ExampleModel.onnx --saveEngine=ExampleModel.engine

ERROR:
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
ERROR: [TRT]: 2: [makeReshapeExplicit.cpp::expandConvolution::131] Error Code 2: Internal Error (Myelin support for convolution with 2 inputs will be added by TRT-12816.)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
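
For completeness, since the model has dynamic batch and time dimensions, I am aware trtexec may want explicit optimization profiles. This is the shape-pinned invocation I would try next (input name taken from the TensorSpec above), though I have not confirmed it changes anything about the ConvLSTM error itself:

trtexec --onnx=ExampleModel.onnx --saveEngine=ExampleModel.engine \
        --minShapes=input:1x1x64x64x1 \
        --optShapes=input:1x8x64x64x1 \
        --maxShapes=input:1x16x64x64x1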

Hi @jesus.parejo, please can you open an issue on the TensorRT GitHub? That's where the TensorRT team hang out, so it's the best place to get help with technical issues.

Thanks and best wishes,

Sophie

OK, I will try. Thank you!