This version of TensorRT does not support dynamic ReverseSequence length

Description

I built a model with a Bidirectional LSTM, saved it in .pb format, and then converted the .pb file to ONNX. When I use trtexec to convert the ONNX model to a TRT engine, it fails.
Model code:

from tensorflow import keras
from tensorflow.keras import layers

max_features = 20000  # Only consider the top 20k words
maxlen = 200  # Only consider the first 200 words of each movie review

# Input for fixed-length sequences of 200 integers
inputs = keras.Input(shape=(200,), dtype="int32")
# Embed each integer in a 128-dimensional vector
x = layers.Embedding(max_features, 128)(inputs)
# Add 2 bidirectional LSTMs
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)
# Add a classifier
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.summary()  # summary() prints directly and returns None

(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(
    num_words=max_features
)
print(len(x_train), "Training sequences")
print(len(x_val), "Validation sequences")
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen)

model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val))
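For reference, pad_sequences with the defaults used above left-pads short reviews with zeros and keeps the last maxlen tokens of long ones, so every input the model sees is exactly 200 steps long. A pure-Python sketch of that behavior (the helper name pad_sequence is mine, not a Keras API):

```python
def pad_sequence(seq, maxlen, value=0):
    """Mimics keras pad_sequences defaults (padding="pre", truncating="pre")
    for a single sequence."""
    if len(seq) >= maxlen:
        return list(seq[-maxlen:])  # keep the last maxlen tokens
    return [value] * (maxlen - len(seq)) + list(seq)  # left-pad with zeros

print(pad_sequence([1, 2, 3], 5))           # [0, 0, 1, 2, 3]
print(pad_sequence([1, 2, 3, 4, 5, 6], 5))  # [2, 3, 4, 5, 6]
```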

Environment

TensorRT Version: 8.0.16
GPU Type: V100:32GB
Nvidia Driver Version: 450.119.04
CUDA Version: 10.2
CUDNN Version: 8.0.2
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): 3.8.0
TensorFlow Version (if applicable): 2.3

Relevant Files

the log file
the onnx file

I made the input shape fixed, but it failed again.

inputs = keras.Input(shape=(200,), batch_size=32, dtype="int32")

onnx file
log file

Hi,

Regarding the primary error you're facing: dynamic dimensions are currently not supported for the ReverseSequence operator.

We tried to build an engine using the ONNX model you shared, but are facing the following error:
TensorRT supports tensors with at most 2G (2^31 - 1) elements.

[10/18/2021-16:55:07] [E] [TRT] ModelImporter.cpp:772: --- End node ---
[10/18/2021-16:55:07] [E] [TRT] ModelImporter.cpp:775: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - StatefulPartitionedCall/model/bidirectional/forward_lstm/PartitionedCall/while_loop
[graphShapeAnalyzer.cpp::processCheck::582] Error Code 4: Internal Error ((Unnamed Layer* 60) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,32,64])
[10/18/2021-16:55:07] [E] Failed to parse onnx file
[10/18/2021-16:55:07] [I] Finish parsing network model
[10/18/2021-16:55:07] [E] Parsing model failed
[10/18/2021-16:55:07] [E] Failed to create engine from model.
[10/18/2021-16:55:07] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8200] # /usr/src/tensorrt/bin/trtexec --onnx=bi_list_fixed.onnx --verbose --workspace=8000
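The dimensions in the error make the overflow easy to check by hand. Note that the leading dimension is exactly INT32_MAX (2^31 - 1), which suggests the exporter could not resolve the loop's trip count rather than the model genuinely producing a 2-billion-step sequence (an assumption based on how ONNX Loop outputs are shape-inferred, not something the log states):

```python
# TensorRT limits any single tensor to at most 2^31 - 1 elements.
INT32_MAX = 2**31 - 1

# Shape reported for the offending LoopOutput tensor in the trtexec log.
dims = [2147483647, 32, 64]

volume = 1
for d in dims:
    volume *= d

print(dims[0] == INT32_MAX)  # True: the leading dim is exactly INT32_MAX
print(volume > INT32_MAX)    # True: the tensor volume exceeds the limit
```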

The following similar issue may help you.

Thank you.

Does TRT 8 support more than 2G elements now? The same error occurred!

I wonder whether there really is an extremely large node bigger than 2G elements. Is it possible that the parser cannot stop (an endless loop, for example) when parsing the ONNX model?

Here is my log:

[02/14/2022-07:28:27] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[02/14/2022-07:28:27] [E] [TRT] ModelImporter.cpp:779: ERROR: ModelImporter.cpp:166 In function parseGraph:
[6] Invalid Node - generic_loop_Loop__183
[graphShapeAnalyzer.cpp::processCheck::581] Error Code 4: Internal Error ((Unnamed Layer* 755) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,1,160])
[graphShapeAnalyzer.cpp::processCheck::581] Error Code 4: Internal Error ((Unnamed Layer* 755) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,1,160])
[02/14/2022-07:28:34] [E] Failed to parse onnx file
[02/14/2022-07:28:34] [I] Finish parsing network model
[02/14/2022-07:28:34] [E] Parsing model failed
[02/14/2022-07:28:34] [E] Failed to create engine from model.
[02/14/2022-07:28:34] [E] Engine set up failed