Hello!
I’m trying to manually construct a TensorRT engine, following the SampleCharRNN example from the Developer Guide, and I don’t fully understand how to properly add an LSTM layer.
According to the docs for nvinfer1::INetworkDefinition::addRNN in C++, the layout of the LSTM’s input tensor should be {1, T, N, C}, where
- T: the number of time sequences to be executed.
- N: the number of minibatches for each time sequence.
- C: the size of the data to be submitted to the RNN.
How should I specify the network input shape if it contains an LSTM layer?
In the SampleCharRNN example the input is specified as
auto data = network->addInput(INPUT_BLOB_NAME, DataType::kFLOAT, DimsCHW{SEQ_SIZE, BATCH_SIZE, DATA_SIZE}); // SEQ_SIZE = 1, BATCH_SIZE = 1, DATA_SIZE = 512
Let’s say I have 8 sequences, each of length 16, where every element of a sequence is a 64-element feature vector (so the input data has shape (8, 16, 64)).
What would the input shape be in this case? DimsCHW{16, 8, 64}?

Could you please clarify the T, N and C parameters a bit more? Again, if I have 8 sequences of 16 timesteps each, with every timestep of size 64, which values would T, N and C take? Do I understand correctly that C = 64, N = 8 and T = 16? Or is T supposed to be 8 in this case (the total number of sequences)?

Also, what is the relation between the batch size specified via setMaxBatchSize(int batchSize) on the builder and N, the number of minibatches for each time sequence?
Thanks in advance!