exprBuilder can only be used once, and only "constant" API can be used

Description

I created a multi-input ReflectPad custom plugin, but I hit a segmentation fault when getOutputDimensions() is called: I can't use operation(), and constant() only works once.
My plugin code:

> DimsExprs DReflectPadPlugin::getOutputDimensions(
>     int outputIndex, const DimsExprs *inputs, int nbInputs, IExprBuilder &exprBuilder) {
>     assert(inputs[0].nbDims == 4);                          // input tensor: NCHW
>     assert(inputs[1].nbDims == 1 || inputs[1].nbDims == 4); // pad: one value or four
>     DimsExprs output;
>     output.nbDims = inputs[0].nbDims;
>     output.d[0] = inputs[0].d[0];
>     output.d[1] = inputs[0].d[1];
>     const auto *two = exprBuilder.constant(2);
>     if (inputs[1].nbDims == 1) {
>         // single symmetric pad: out = in + 2 * pad
>         output.d[2] = exprBuilder.operation(DimensionOperation::kSUM, *inputs[0].d[2],
>             *exprBuilder.operation(DimensionOperation::kPROD, *inputs[1].d[0], *two)); // Segmentation fault (core dumped)
>         output.d[3] = exprBuilder.operation(DimensionOperation::kSUM, *inputs[0].d[3],
>             *exprBuilder.operation(DimensionOperation::kPROD, *inputs[1].d[0], *two));
>     } else {
>         // four pad values: each spatial dim grows by the pads on its two sides
>         output.d[2] = exprBuilder.operation(DimensionOperation::kSUM, *inputs[0].d[2],
>             *exprBuilder.operation(DimensionOperation::kSUM, *inputs[1].d[2], *inputs[1].d[3]));
>         output.d[3] = exprBuilder.operation(DimensionOperation::kSUM, *inputs[0].d[3],
>             *exprBuilder.operation(DimensionOperation::kSUM, *inputs[1].d[0], *inputs[1].d[1]));
>     }
>     return output;
> }
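For reference, the output-shape arithmetic those expression trees are meant to encode can be checked with plain integers. This is a hypothetical standalone sketch with no TensorRT dependency; the function names are mine, and the pad order [left, right, top, bottom] in the four-value branch is an assumption inferred from the index usage in the code above:

```cpp
#include <array>
#include <cassert>

// Single symmetric pad (the inputs[1].nbDims == 1 branch): out = in + 2 * pad.
int reflectPadSymmetric(int in, int pad) { return in + 2 * pad; }

// Four pad values, assumed order [left, right, top, bottom]
// (the inputs[1].nbDims == 4 branch): H grows by top + bottom, W by left + right.
std::array<int, 2> reflectPadHW(int h, int w, const std::array<int, 4> &pads) {
    return {h + pads[2] + pads[3], w + pads[0] + pads[1]};
}
```

For a 32x32 feature map with pad = 2 on every side, both branches give 36x36.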

When I try changing the code (attempt 1):

> DimsExprs DReflectPadPlugin::getOutputDimensions(
>         ...
>         const auto *two = exprBuilder.constant(2); // no error
>         const auto *three = exprBuilder.constant(3); // Segmentation fault (core dumped)
>         ...

When I try changing the code (attempt 2):

> DimsExprs DReflectPadPlugin::getOutputDimensions(
>         ...
>         auto four = exprBuilder.operation(DimensionOperation::kPROD, *aaa, *bbb);  // Segmentation fault (core dumped)
>         ...

Environment

TensorRT Version: v8.2.0.6
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 460.80
CUDA Version: 11.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Please refer to the link below for a related custom-plugin implementation and sample:
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#onnx_packnet

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins, or refactor existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.

Thanks!

Thanks for your reply. I ran into this error while already using IPluginV2DynamicExt; the plugin only works when I use a fixed input size.

Hi,

You can refer to the link below to build a custom layer.

Thank you.