addPluginV2 using ITensor* const* as input

Hi all,

I am currently trying to code up my network definition using the C++ API. I created a custom plugin and tried to add it to the network with the following code:

UpsampleBy2* upsample_plugin = new UpsampleBy2();
auto some_layer = network->addPluginV2(relu_5_2->getOutput(0), 1, *upsample_plugin);

But the compiler throws the following error:

argument of type "nvinfer1::ITensor *" is incompatible with parameter of type "nvinfer1::ITensor *const *"

I guess my first question is why addPluginV2 wants an ITensor* const* as input, while other functions like addConvolution and addActivation expect an ITensor&.

My second question is how to resolve this issue. I tried searching for it as a generic C++ problem on Stack Overflow but didn't find much useful information. I also tried casting the pointer directly, but the compiler doesn't accept that either.

Also, is there any more up-to-date example of using addPluginV2? So far I have been following the online documentation (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#add_custom_layer_python) and the sample provided in TensorRT/samples/sampleUffSSD, but these resources are not always consistent. For instance, I don't think addPlugin is even called in the sample project.

So I think I figured out the ITensor* const* part: it's because some layers (e.g. concatenation) take more than one input tensor, so the API accepts an array of tensor pointers rather than a single tensor reference.
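For anyone who hits the same compiler error, here is a minimal sketch of the call that compiles, based on my snippet above (same variables and using-directives as in my original post):

ITensor* inputs[] = { relu_5_2->getOutput(0) };   // addPluginV2 expects an array of input tensor pointers
auto some_layer = network->addPluginV2(inputs, 1, *upsample_plugin);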
I can't delete the post, so I will just leave it here.

I have the same problem. How can I fix it?

Hi,
Please refer to the links below for custom plugin implementation details and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
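For reference, a rough sketch of the registry-based flow looks like this. The plugin type "UpsampleBy2", version string "1", layer name, and empty field collection are placeholders, and the sketch assumes NvInfer.h is included, the plugin's creator has already been registered (e.g. via REGISTER_TENSORRT_PLUGIN or initLibNvInferPlugins), and the network/tensor variables from the original post are in scope:

// Look up the creator that was registered for the plugin (type and version are placeholders).
auto* creator = getPluginRegistry()->getPluginCreator("UpsampleBy2", "1");

// No plugin fields are passed in this sketch; real plugins usually take their parameters here.
nvinfer1::PluginFieldCollection fc{};
nvinfer1::IPluginV2* plugin = creator->createPlugin("upsample_by_2", &fc);

// addPluginV2 takes an array of ITensor*, which is why its signature differs from addConvolution etc.
nvinfer1::ITensor* inputs[] = { relu_5_2->getOutput(0) };
auto* layer = network->addPluginV2(inputs, 1, *plugin);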

Thanks!