Does TensorRT not support moving the batch dim?

Collecting environment information…
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130

OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 410.104
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.1

Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] opencv-python==3.4.3.18
[pip3] Pillow==6.0.0
[pip3] tensorrt==5.1.5.0
[pip3] torch==1.1.0
[pip3] torchvision==0.2.2.post3
[conda] Could not collect

Describe the problem
When I parsed the ONNX file, I got the error message below.
```
While parsing node number 21 [Transpose]:
ERROR: builtin_op_importers.cpp:1928 In function importTranspose:
[8] Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM
```

It comes from this line in the ONNX graph:
```
%64 = Transpose[perm = [2, 0, 1]]
```

This operation moves the batch dim. Is it not supported by TensorRT?
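For a 3-D tensor, perm = [2, 0, 1] moves the batch dim (dim 0) to position 1, which is exactly what the parser's assertion rejects. A quick NumPy check of the shapes (the concrete sizes here are just made up for illustration):

```python
import numpy as np

# A batch of 4 feature maps with shape (batch, H, W) = (4, 8, 16).
x = np.zeros((4, 8, 16))

# The same permutation as the ONNX node: perm = [2, 0, 1].
y = np.transpose(x, (2, 0, 1))

print(y.shape)  # (16, 4, 8) -- the batch dim is no longer dim 0
```

Because the ONNX parser in TensorRT 5 builds an implicit-batch network, the batch dim must stay in place, so `perm.order[BATCH_DIM] == BATCH_DIM` fails for this node.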

Also, is there any example of an RNN (LSTM) layer with PyTorch?

Thanks
Best,

Additionally, my model is composed of several CNN layers, a Reshape, and two RNN (LSTM) layers. See below.

The input shape of PyTorch's LSTM API is (seq_len, batch, input_size). So my network includes a reshape layer (transpose) to use the LSTM from PyTorch.
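One way to avoid exporting a batch-moving Transpose at all is to construct the LSTM with `batch_first=True`, so PyTorch expects `(batch, seq_len, input_size)` and dim 0 never has to move. A minimal sketch (the sizes 32/64 and the batch/sequence lengths are placeholders, not from the original model):

```python
import torch
import torch.nn as nn

# batch_first=True keeps the batch dim in position 0, so no
# Transpose over the batch dim is needed before the LSTM.
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

x = torch.randn(4, 10, 32)   # (batch, seq_len, input_size)
out, (h, c) = lstm(x)

print(out.shape)             # torch.Size([4, 10, 64])
```

Whether this removes the problematic node from your exported graph depends on where the `[2, 0, 1]` permute comes from in your model; this only sketches the LSTM side of it.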

In this case, should I build the TRT network from scratch with the TRT API (addInput, addCNN, …)?

I met the same problem. Have you resolved it yet?

I also have this problem. Any solution?

I ran into a similar problem as well, and am also looking for a solution.
I met the same problem. How can it be solved?