I’m trying to get a simple RCNN (recurrent CNN) running in TensorRT 7.0 (also tried 6.0) in C++ using the ONNX parser.
I want to confirm whether there is an out-of-the-box way of doing this with any of these frameworks, or whether writing a custom layer/plugin in C++ is the way to go.
I’ve tried a lot of combinations; all of them work in the frameworks but fail to parse in TRT. Let’s take one as an example.
Using TensorFlow 1.14 (also tried 1.15, 2.0, and 2.1) with Keras 2.3.1 (and tf.keras for TF 2.0+) for training, and ONNX/keras2onnx 1.6 for export with opsets anywhere from 8 to 11, this example fails to parse on the TRT C++ side: https://keras.io/examples/conv_lstm/
You can break down the above architecture from the link to its most basic form, with a single ConvLSTM2D layer, or even add a Reshape and use a vanilla LSTM layer instead; every variant fails to parse an exported and validated/checked ONNX file, each with a different error. For the LSTM variant the parser fails with:

[Transpose]: ERROR: builtin_op_importers.cpp:1928 In function importTranspose: Assertion failed: perm.order[BATCH_DIM] == BATCH_DIM

Changing the opset at ONNX export may change the reported error, but parsing still fails. The same applies to changing the versions of TF, ONNX, and keras2onnx. I also tried switching the data format (channels first/last), but it has no effect on the errors (as expected).
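For what it’s worth, my reading of that assertion is that TensorRT 6/7 builds the network in implicit-batch mode and therefore refuses any Transpose whose permutation moves the batch axis. A plausible cause (an assumption on my part, not verified against the actual ONNX graph) is that the exporter rewrites the batch-major LSTM input into time-major form with a Transpose like perm=[1,0,2]. A minimal sketch of the check TRT appears to apply:

```python
# Illustrative sketch only: the perm values below are assumed, not read
# from the failing ONNX file.
BATCH_DIM = 0

def trt_accepts(perm):
    # TensorRT 6/7 with implicit batch requires the batch axis to stay put,
    # i.e. the assertion perm.order[BATCH_DIM] == BATCH_DIM must hold.
    return perm[BATCH_DIM] == BATCH_DIM

batch_to_time_major = (1, 0, 2)   # swaps batch and time axes -> rejected
channel_shuffle = (0, 3, 1, 2)    # batch axis stays at dim 0 -> accepted

print(trt_accepts(batch_to_time_major))  # False
print(trt_accepts(channel_shuffle))      # True
```

If this is the cause, the transpose over the batch dimension would have to disappear from the exported graph (or the network would need explicit-batch handling) before the parser can accept it.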
This is true whether I build the model in Keras with the functional or the sequential style, with the batch size explicitly specified. I’ve also tried the same architecture in PyTorch (1.2, 1.3) and in plain TensorFlow, with similar results.
Thank you for your time.