TensorFlow-to-ONNX-to-TensorRT engine conversion failure for a CRNN model with a bidirectional LSTM layer (ReverseSequence op) on Jetson Xavier with TensorRT 8.0.1.6
Environment
TensorRT Version: 8.0.1.6
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 4.6-b197
CUDNN Version: 4.6-b197
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.2
PyTorch Version (if applicable): -
Baremetal or Container (if container which image + tag): -
Keras/TensorFlow CRNN model with a bidirectional LSTM layer
Steps To Reproduce
TensorRT engine conversion from ONNX fails at the ReverseSequence op:
[TensorRT] VERBOSE: lstm_10_back_reverse_seq_rev_seq [ReverseSequence] inputs: [lstm_10_back_transpose → (-1, 50, 128)[FLOAT]], [lstm_10_back_reverse_seq_expand0 → (-1)[INT32]],
[TensorRT] ERROR: 4: [network.cpp::validate::2411] Error Code 4: Internal Error (Network must have at least one output)
engine conversion failed
This happens even though TensorRT 8.0.1.6 lists ReverseSequence as a supported op.
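For reference, a minimal parse-only sketch following the standard TensorRT Python explicit-batch ONNX parser flow (not the exact conversion script; the model path is a placeholder) can be used to surface the individual parser errors behind this message:
check_parse.py
import tensorrt as trt

ONNX_PATH = "crnn.onnx"   # placeholder path to the exported model

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
# the ONNX parser requires an explicit-batch network
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    ok = parser.parse(f.read())

if not ok:
    # if the parser stops at a node it cannot convert, the network is left
    # without marked outputs, which can later show up as
    # "Network must have at least one output"
    for i in range(parser.num_errors):
        print(parser.get_error(i))
else:
    print("parsed OK, network outputs:", network.num_outputs)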
Hi,
Please share the ONNX model and the conversion script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import sys
import onnx

# usage: python check_model.py <your_model.onnx>
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the graph is invalid
2) Try running your model with the trtexec command, for example: trtexec --onnx=model.onnx --verbose
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
Currently the ReverseSequence op, as noted in the operator support documentation, does not yet support dynamic length inputs (i.e., the lengths of the sequences in a batch, sequence_lens, cannot be dynamic). You could try a static batch instead.
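A minimal sketch of one way to do that, assuming the only dynamic dimension in the exported graph is the batch axis (the file names and batch size below are placeholders):
make_static_batch.py
import onnx

STATIC_BATCH = 1                       # placeholder: the batch size you plan to run with

model = onnx.load("crnn.onnx")         # placeholder path to the exported model
for inp in model.graph.input:
    dim0 = inp.type.tensor_type.shape.dim[0]
    if dim0.dim_param or dim0.dim_value <= 0:   # dynamic batch dimension
        dim0.dim_value = STATIC_BATCH           # setting dim_value replaces the symbolic dim
onnx.checker.check_model(model)
onnx.save(model, "crnn_static.onnx")
Depending on how the model was exported, it may be simpler to re-export it from Keras/TensorFlow with a fixed batch size, since intermediate tensors in the graph may still carry the dynamic dimension.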