Parsing plugin in onnx-tensorrt parser

I have a plugin, CTCGreedyDecoder, with two inputs; it is the last layer shown in the attached picture.
So the layer has two inputs and one output.

The onnx-tensorrt parser imports the plugin into TensorRT in the file TensorRT/parsers/onnx/builtin_op_importers.cpp as follows.

    //const int nbInputs = node.input().size();
    //const int nbOutputs = node.output().size();
    //LOG_ERROR("nbInputs: " << nbInputs );
    //LOG_ERROR("nbOutputs: " << nbOutputs );
    nvinfer1::ITensor* tensorPtr = &convertToTensor(inputs.at(0), ctx);
    int nbDims = tensorPtr->getDimensions().nbDims;
    ASSERT(nbDims >= 3 && nbDims <= 4 && "CTCGreedyDecoder expects a 3D or 4D input tensor!", ErrorCode::kUNSUPPORTED_NODE);
    OnnxAttrs attrs(node, ctx);    

    // Populate CTCGreedyDecoder plugin properties.
    const std::string pluginName = "CTCGreedyDecoder_TRT";
    const std::string pluginVersion = "1";
    std::vector<nvinfer1::PluginField> f;
    // Create plugin from registry
    nvinfer1::IPluginV2* plugin = importPluginFromRegistry(ctx, pluginName, pluginVersion, node.name(), f);

    ASSERT(plugin != nullptr && "CTCGreedyDecoder plugin was not found in the plugin registry!", ErrorCode::kUNSUPPORTED_NODE);

    auto* layer = ctx->network()->addPluginV2(&tensorPtr, 1, *plugin);

I printed the number of inputs and outputs and found the following counts at CTCGreedyDecoder:
    [TRT]: /home/xavier/TensorRT/parsers/onnx/builtin_op_importers.cpp:1567: nbInputs: 2
    [TRT]: /home/xavier/TensorRT/parsers/onnx/builtin_op_importers.cpp:1568: nbOutputs: 1

When converting the ONNX model to an engine, the number of inputs observed in the CTCGreedyDecoder plugin's method

    void configurePlugin(const DynamicPluginTensorDesc* in, int nbInput, const DynamicPluginTensorDesc* out, int nbOutput) override;

is 1, but it should be 2.

What is wrong in my onnx-tensorrt parser implementation?
The DEFINE_BUILTIN_OP_IMPORTER(CTCGreedyDecoder) importer is shown above.


    auto* layer = ctx->network()->addPluginV2(&tensorPtr, 1, *plugin);

It seems you pass only one tensor to the plugin: `&tensorPtr` with a count of 1 registers a single input, which is why configurePlugin reports nbInput as 1.
Here is a multi-input version for your reference:

    std::vector<nvinfer1::ITensor*> tensors;
    // Gather all of the node's inputs into a contiguous array.
    for (auto& input : inputs)
        tensors.push_back(&convertToTensor(input, ctx));
    RETURN_ALL_OUTPUTS(ctx->network()->addPluginV2(tensors.data(), static_cast<int>(tensors.size()), *plugin));