Does TensorRT support input with (sequence, batch, embedding) format?

The inputs to CV networks use the (N, C, H, W) format, and TensorRT works well when the batch size is in the 0th dimension.
But in some NLP networks, the input format is (sequence, batch, embedding).
So my questions are:

  1. Does TensorRT support formats with the batch size in the 1st dimension?
  2. It would be helpful to have an example network showing the definition and execution.

Hi @cwf_21th,

Hope the following link helps you.
The article below covers how to use BERT with TensorRT, including the input and output dimensions for FC layers.
https://developer.nvidia.com/blog/nlu-with-tensorrt-bert/
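As a side note, with explicit-batch networks (TensorRT 6 and later) the batch size is not tied to any particular dimension, so a (sequence, batch, embedding) input can in principle be defined directly. If a surrounding pipeline assumes batch-first tensors, a common workaround is to permute the axes before feeding the engine. A minimal NumPy sketch of that permutation (the shapes here are illustrative, not from the article):

```python
import numpy as np

# Illustrative NLP input in (sequence, batch, embedding) layout
seq_len, batch, embed = 128, 4, 768
x = np.random.rand(seq_len, batch, embed).astype(np.float32)  # (S, N, E)

# Move the batch axis to dimension 0: (S, N, E) -> (N, S, E).
# ascontiguousarray ensures the buffer is dense before copying to the device.
x_batch_first = np.ascontiguousarray(x.transpose(1, 0, 2))

print(x_batch_first.shape)  # (4, 128, 768)
```

The transpose is cheap to express but does imply a memory copy once made contiguous, so for latency-sensitive serving it is usually preferable to define the network with the layout the model actually uses.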

Thank you.