Add Plugins to Network in C++ API with explicit batch dimension in TensorRT 7

(I’m using TensorRT 7 on Windows)

I want to add a plugin from the TensorRT library to my network using the C++ API.
The rest of my network is parsed from an ONNX file.

In previous versions of TensorRT, I would call initLibNvInferPlugins(),
parse my ONNX file, create the plugin (for the sake of example, let's say it was the nmsPlugin, created via createNmsPlugin()),
and add it to my network using network->addPluginV2().
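
For reference, a minimal sketch of that earlier (implicit batch) flow — this assumes the createNMSPlugin()/NMSParameters declarations from NvInferPlugin.h in pre-7 releases, and elides the ONNX parsing and plugin parameters:

#include <NvInfer.h>
#include <NvInferPlugin.h>

void buildWithShippedNmsPlugin(nvinfer1::IBuilder* builder, nvinfer1::ILogger& logger)
{
    // Register the plugins shipped with TensorRT
    initLibNvInferPlugins(&logger, "");

    // Implicit-batch network (no kEXPLICIT_BATCH flag set)
    nvinfer1::INetworkDefinition* network = builder->createNetworkV2(0U);

    // ... parse the ONNX file into `network` here ...

    // Create the NMS plugin (parameters elided for brevity)
    nvinfer1::plugin::NMSParameters params{};
    nvinfer1::IPluginV2* pluginObj = createNMSPlugin(params);

    // Hook the plugin up to its input tensors
    nvinfer1::ITensor* pluginInputs[2] = {/* boxes, scores */};
    network->addPluginV2(pluginInputs, 2, *pluginObj);
}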

In TensorRT 7, I must use an explicit batch when parsing from an ONNX file (ok).
But now when I add the plugin to my network, the logger tells me
“PluginV2Layer must be V2Ext or V2IOExt or V2DynamicExt when there is no implicit batch dimension.”

createNmsPlugin() returns a pointer to IPluginV2, but I can see from the source code that it is derived from IPluginV2Ext.

Is there a way I can add this plugin using the C++ API?

Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Could you please share your script and model file to reproduce the issue?



  • OS Windows 10, 64-bit
  • GPU Titan RTX
  • Driver Version 441.22
  • CUDA version cuda_10.2.89_441.22_win10
  • CUDNN version cudnn-10.2-windows10-x64-v7.6.5.32
  • TensorRT version TensorRT-

Here is a sketch in C++ to reproduce the problem (I could make a complete file with all the boilerplate, but I think this shows where the issue is):

nvinfer1::INetworkDefinition* networkDefinition;
nvinfer1::IBuilder* builder;
nvinfer1::IBuilderConfig* builderConfig;
// ... code to create builder and config goes here ... //

// create network with EXPLICIT batch dimension
nvinfer1::NetworkDefinitionCreationFlags flags = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
networkDefinition = builder->createNetworkV2(flags);

// ... code to build most of the network goes here ... //

nvinfer1::IPluginV2* pluginObj;
// ... code to create plugin goes here ... //

nvinfer1::ITensor** pluginInputs;
int nbInputs;
// ... code to setup plugin inputs goes here ... //

// add this plugin to our network
nvinfer1::IPluginV2Layer* pluginLayer = networkDefinition->addPluginV2(pluginInputs, nbInputs, *pluginObj);

//... code to build engine goes here ... //

The engine will not build; here are the relevant messages from the logger:

“(Unnamed Layer* 0) [PluginV2]: PluginV2Layer must be V2Ext or V2IOExt or V2DynamicExt when there is no implicit batch dimension.”
“Layer (Unnamed Layer* 0) [PluginV2] failed validation”
“Network validation failed.”

If I do a similar thing without the explicit batch dimension, it works fine (but if I want to parse from ONNX I NEED to have an explicit batch according to the documentation).

It seems like it could never work, because there is no addPluginV2Ext() method on INetworkDefinition. Is this just impossible to do?


You would need to update the plugin.
The plugin that we ship is derived from IPluginV2; however, the OSS plugin is derived from IPluginV2Ext, so you should use the TRT OSS plugin code instead.
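
A sketch of how that could look (unverified): after building and linking the OSS plugins, create the plugin through the plugin registry and add it to the explicit-batch network. The creator name "BatchedNMS_TRT" / version "1" come from the TensorRT OSS repo; adjust them, and the PluginFieldCollection contents, to match the plugin you actually build:

#include <NvInfer.h>
#include <NvInferPlugin.h>

nvinfer1::IPluginV2Layer* addOssNmsPlugin(nvinfer1::INetworkDefinition* network,
                                          nvinfer1::ITensor* const* inputs, int nbInputs,
                                          nvinfer1::ILogger& logger)
{
    // Registers the linked plugin library with the registry -- link against the
    // OSS build so the IPluginV2Ext-derived implementations are the ones registered.
    initLibNvInferPlugins(&logger, "");

    auto* creator = getPluginRegistry()->getPluginCreator("BatchedNMS_TRT", "1");
    if (!creator)
        return nullptr;

    // Plugin parameters are passed as PluginFields; left empty here for brevity.
    nvinfer1::PluginFieldCollection fc{};
    fc.nbFields = 0;
    fc.fields = nullptr;

    nvinfer1::IPluginV2* pluginObj = creator->createPlugin("nms", &fc);

    // Passes validation under kEXPLICIT_BATCH because the OSS plugin
    // derives from IPluginV2Ext.
    return network->addPluginV2(inputs, nbInputs, *pluginObj);
}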


Ah I see, thank you!