Onnx2TRT missing TensorListStack plugin

Description

Does any version of TensorRT support a TensorListStack plugin?

I'm trying to convert an ONNX model to a TensorRT engine.
The model is a pre-trained MobileNetV2 downloaded from the TensorFlow 2 Detection Model Zoo (models/tf2_detection_zoo.md at master · tensorflow/models · GitHub).

During the conversion I got the following error:

No importer registered for op: TensorListStack. Attempting to import as plugin.
[12/08/2021-12:04:53] [TRT] [I] Searching for plugin: TensorListStack, plugin_version: 1, plugin_namespace: 
ERROR:EngineBuilder:Failed to load ONNX file: (...)/onnx.onnx
ERROR:EngineBuilder:In node 9 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
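
For context, the engine is built roughly like this (a minimal sketch using the TensorRT 8.2 Python API, not my exact EngineBuilder script; the ONNX path is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# parser.parse() is where "No importer registered for op: TensorListStack"
# is reported; get_error() prints the details quoted above.
with open("onnx.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("Failed to load ONNX file")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB, TensorRT 8.2 API
engine = builder.build_engine(network, config)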

I found the GitHub pull request that addresses this problem:
https://github.com/onnx/tensorflow-onnx/pull/1448/commits
To get support for that op, I updated tf2onnx to 1.9.3 (the setup was initially using an older version of tf2onnx).
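
To confirm the updated converter actually removed the raw TensorList ops from the exported graph, a quick check like this can be run on the re-exported model (a minimal sketch; "model.onnx" is a placeholder path):

import onnx
import tf2onnx

print("tf2onnx version:", tf2onnx.__version__)

model = onnx.load("model.onnx")
leftover = sorted({n.op_type for n in model.graph.node
                   if n.op_type.startswith("TensorList")})
print("remaining TensorList ops:", leftover if leftover else "none")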

Unfortunately, after that I got a new error:

ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::519] Error Code 9: Internal Error ((Unnamed Layer* 747) [Recurrence]: IRecurrenceLayer cannot be used to compute a shape tensor)

More info:

ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::519] Error Code 9: Internal Error ((Unnamed Layer* 747) [Recurrence]: IRecurrenceLayer cannot be used to compute a shape tensor)
ERROR: [TRT]: ModelImporter.cpp:720: While parsing node number 430 [Loop -> "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:0"]:
ERROR: [TRT]: ModelImporter.cpp:721: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:722: input: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_cast:0"
input: "__inference_Postprocessor_BatchMultiClassNonMaxSuppression_map_while_cond_11290_27730_Postprocessor/BatchMultiClassNonMaxSuppression/map/while/LogicalAnd:0"
input: "StatefulPartitionedCall/map/Const:0"
input: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/strided_slice__1922:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:1"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:2"
output: "detection_multiclass_scores"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:4"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:5"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop:6"
output: "detection_scores"
output: "detection_boxes"
name: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while_loop"
op_type: "Loop"
attribute {
  name: "body"
  g {
    node {
      input: "postprocessor_batchmulticlassnonmaxsuppression_map_while_postprocessor_batchmulticlassnonmaxsuppression_map_strided_slice_1_0:0"
      output: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less__63:0"
      name: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less__63"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
      domain: ""
    }
    node {
      input: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/strided_slice__1922:0"
      output: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1__61:0"
      name: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1__61"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
      domain: ""
    }
    node {
      input: "postprocessor_batchmulticlassnonmaxsuppression_map_while_postprocessor_batchmulticlassnonmaxsuppression_map_while_loop_counter:0"
      output: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1__60:0"
      name: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1__60"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
      domain: ""
    }
    node {
      input: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1__60:0"
      input: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1__61:0"
      output: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1:0"
      name: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less_1"
      op_type: "Less"
      domain: ""
    }
    node {
      input: "postprocessor_batchmulticlassnonmaxsuppression_map_while_placeholder:0"
      input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/add/y:0"
      output: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Identity_2:0"
      name: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/add"
      op_type: "Add"
    }
    node {
      input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Identity_2:0"
      output: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less__62:0"
      name: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less__62"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
      domain: ""
    }
    node {
      input: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less__62:0"
      input: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less__63:0"
      output: "cond___Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Less:0"
      name: "cond___Postprocessor/BatchMultiClas
ERROR: [TRT]: ModelImporter.cpp:723: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:726: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_2
[graph.cpp::computeInputExecutionUses::519] Error Code 9: Internal Error ((Unnamed Layer* 747) [Recurrence]: IRecurrenceLayer cannot be used to compute a shape tensor)
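
The failing node is the ONNX Loop that tf2onnx generated for the BatchMultiClassNonMaxSuppression while loop. To see what ends up inside that Loop body, the exported model can be inspected like this (a minimal sketch; "model.onnx" is a placeholder path):

import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "Loop":
        # the "body" attribute holds the Loop's subgraph
        body = next(a.g for a in node.attribute if a.name == "body")
        print(node.name)
        print("  body ops:", sorted({n.op_type for n in body.node}))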

Docker container: Custom dGPU (Docker Containers — DeepStream 6.1.1 Release documentation)
TensorRT Version: 8.2.1.8
tf2onnx Version: 1.9.3

Hi,
Please refer to the links below on custom plugin implementation and samples:

While IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
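
If you implement TensorListStack as a custom plugin, you can load the compiled library and verify that the creator is registered before parsing, along these lines (a minimal sketch; the library and plugin names are hypothetical):

import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

ctypes.CDLL("libtensorlist_plugin.so")       # hypothetical custom plugin library
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # register built-in and loaded creators

registry = trt.get_plugin_registry()
names = [c.name for c in registry.plugin_creator_list]
print("TensorListStack creator registered:", "TensorListStack" in names)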

Thanks!