Description
Hi,
I’m trying to create a custom TensorRT plugin, with the eventual goal of supporting TensorFlow’s tf.nn.ctc_beam_search_decoder function. For now, all I am trying to do is create a dummy plugin that passes all inputs through unchanged (no operations), so that I can test converting a TensorFlow model containing ctc_beam_search_decoder to ONNX, and then to a TensorRT engine.
The TensorFlow → ONNX conversion produces the following error:
Tensorflow op [LprNet/decode_layer/CTCBeamSearchDecoder: CTCBeamSearchDecoder] is not supported
This is expected.
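For reference, I do the conversion with tf2onnx along these lines (a sketch; the paths and opset are placeholders for my setup, and the unsupported op is passed through as a custom op so the export can proceed):

    python -m tf2onnx.convert --saved-model ./saved_model_dir --output model.onnx --opset 11 --custom-ops CTCBeamSearchDecoder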
The custom TensorRT plugin then produces this error when returning from CtcBeamSearchCustom::getOutputDimensions():
IndexError: vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)
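As far as I can tell, that is the message libstdc++ produces when std::vector::at() is called with an out-of-range index, e.g. index 1 on a vector of size 1:

    #include <vector>

    int main()
    {
        std::vector<int> v(1);
        v.at(1); // throws std::out_of_range: "vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)"
    }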
The plugin is based on the example in the TensorRT Developer Guide, with:
DimsExprs CtcBeamSearchCustom::getOutputDimensions(
    int32_t outputIndex, const DimsExprs* inputs, int32_t nbInputs,
    IExprBuilder& exprBuilder)
{
    std::cout << "CtcBeamSearchCustom::getOutputDimensions()" << std::endl;
    std::cout << "CtcBeamSearchCustom::getOutputDimensions(); nbDims:"
              << inputs->nbDims << std::endl;
    switch (outputIndex)
    {
    case 0:
    {
        // First dimension of the output is the sum of the inputs'
        // first dimensions.
        DimsExprs output(inputs[0]);
        output.d[0] = exprBuilder.operation(
            DimensionOperation::kSUM, *inputs[0].d[0], *inputs[1].d[0]);
        return output;
    }
    case 1:
        return inputs[0];
    default:
        throw std::invalid_argument("invalid output");
    }
}
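For comparison, this is roughly what I expect the pure pass-through version to look like (a minimal sketch, untested, assuming the plugin declares one output per input):

DimsExprs CtcBeamSearchCustom::getOutputDimensions(
    int32_t outputIndex, const DimsExprs* inputs, int32_t nbInputs,
    IExprBuilder& exprBuilder)
{
    // Pass-through: output i simply mirrors the dimensions of input i.
    if (outputIndex < 0 || outputIndex >= nbInputs)
    {
        throw std::invalid_argument("invalid output index");
    }
    return inputs[outputIndex];
}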
Is there an obvious error here? Or is there an example plugin with a similar goal that I could follow?
Thanks
Environment
TensorRT Version: 7.2.2.3
TensorRT OSS Version: 21.02
GPU Type: TITAN RTX
Nvidia Driver Version: 460
CUDA Version: 11.2 (libnvinfer for 11.1)
CUDNN Version: 8.1
Operating System + Version: Ubuntu 18.04 (bionic)
Python Version: 3.8
TensorFlow Version: 2.5.0
Baremetal or Container: Custom container
Relevant Files
ctcBeamSearchDecoderCustom.cpp (7.1 KB)
ctcBeamSearchDecoderCustom.h (3.4 KB)