Dynamic parameter "maxSeqLen" for addRNNv2


I have an OCR model named CRNN, which has an input with dynamic width.
So, as you may know, after the CNN part of CRNN the RNN (Bi-LSTM) module receives an uncertain sequence length, which ends up set to -1. According to the error message, the maxSeqLen parameter of addRNNv2 must be > 0.

lstm input shape: 4 [1 1 -1 512]
[11/26/2020-22:23:51] [E] [TRT] Parameter check failed at: …/builder/Network.cpp::addRNNCommon::572, condition: input.getDimensions().d[di.seqLen()] == maxSeqLen
[1] 2038 segmentation fault (core dumped) ./crnn_lstm -s

Therefore, I can’t convert a CRNN model with a dynamic width. How can I solve this problem? Or do I have to fix INPUT_W? Any cues would be highly appreciated.


TensorRT Version:
GPU Type: 2080TI
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: Ubuntu18.04
Python Version (if applicable): 3.7

Hi @hantengfei013,
I am afraid TRT does not support a dynamic maxSeqLen.
What maxSeqLen means is the maximum value of seqLen.
TRT does support a dynamic sequence length, but you must set a maximum value for it, so that we know how much memory to allocate for the input tensor.


How do I set maxSeqLen when the input has the dynamic shape [-1, 32, -1, 3]? When I set maxSeqLen to the largest value that may occur, I get the unexpected error shown below.

[E] [TRT] Parameter check failed at: …/builder/Network.cpp::addRNNCommon::572, condition: input.getDimensions().d[di.seqLen()] == maxSeqLen

If I instead set it to a dynamic value (-1), then:

[E] [TRT] Parameter check failed at: …/builder/Network.cpp::addRNNCommon::570, condition: maxSeqLen > 0

Now what should I do? @AakankshaS

Hi @hantengfei013,
If you are using the C++ API RNNv2, you should use maxSeqLen in the input shape and call IRNNv2Layer::setSequenceLengths to change the sequence length at runtime.
Another option for you is the ILoop API, which supports a -1 dynamic sequence length in the input tensor shape.
And if you can use an ONNX model, our ONNX parser should support converting a dynamic sequence length to the ILoop API as well.

The problem here is that RNNv2 is a deprecated API, and it supports dynamic sequence lengths via IRNNv2Layer::setSequenceLengths rather than via -1 in the input tensor shape.
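The RNNv2 approach above can be sketched as follows. This is a minimal, hedged sketch, not a complete build script: it assumes a TensorRT 7.x-era C++ API where addRNNv2 is still available, and the constants (kMaxSeqLen, kHiddenSize, kLayerCount, the input size 512) and tensor names ("lstm_input", "seq_lengths") are placeholders chosen for illustration, not taken from the original model.

```cpp
#include <NvInfer.h>

// Sketch: build the RNNv2 layer with a fixed upper-bound sequence length,
// and feed the actual per-example lengths through a second INT32 input.
// `network` is assumed to be an already-created nvinfer1::INetworkDefinition*.
void addBiLstm(nvinfer1::INetworkDefinition* network)
{
    constexpr int kMaxSeqLen  = 64;   // chosen upper bound for the sequence axis
    constexpr int kHiddenSize = 256;  // placeholder hidden size
    constexpr int kLayerCount = 1;    // placeholder layer count

    // The input must be declared with the *maximum* sequence length, not -1;
    // addRNNv2 checks that this dimension equals maxSeqLen (the failing
    // condition in the error log above). Shape: [batch, maxSeqLen, inputSize].
    nvinfer1::ITensor* lstmInput = network->addInput(
        "lstm_input", nvinfer1::DataType::kFLOAT,
        nvinfer1::Dims3{1, kMaxSeqLen, 512});

    // A second input carries the real length of each sequence in the batch;
    // at inference time you pad the data to kMaxSeqLen and bind the true
    // lengths here instead of changing the tensor shape.
    nvinfer1::ITensor* seqLens = network->addInput(
        "seq_lengths", nvinfer1::DataType::kINT32, nvinfer1::Dims{1, {1}});

    nvinfer1::IRNNv2Layer* rnn = network->addRNNv2(
        *lstmInput, kLayerCount, kHiddenSize, kMaxSeqLen,
        nvinfer1::RNNOperation::kLSTM);
    rnn->setDirection(nvinfer1::RNNDirection::kBIDIRECTION);

    // This is the key call: dynamic sequence length at runtime,
    // instead of -1 in the input tensor shape.
    rnn->setSequenceLengths(*seqLens);
}
```

In other words, the dynamic-width problem moves from the network definition (where RNNv2 forbids -1) to the input bindings: the engine always sees a kMaxSeqLen-padded tensor, and setSequenceLengths tells the LSTM how much of it is real data.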