TensorRT 3: Using add_plugin() in Python to add Custom Layer

Hi,

I’m building a TensorRT graph using the python API, and I’m trying to add a custom layer written in C++. I’ve created a python wrapper for my custom layer, similarly to how it’s done in the “custom_layers” example of the TensorRT 3.0.2 python package.

This all works fine so far. I can import the custom layer package from python and create an instance of my custom layer plugin, as well as a PluginFactory instance for it.

While the “custom_layers” example passes the custom PluginFactory to tensorrt.utils.caffe_to_trt_engine() in order to instantiate a graph containing the custom layer, I’m trying to build the graph manually, and thus use the network.add_plugin() function to add my custom layer to the graph.

builder = trt.infer.create_infer_builder(G_LOGGER)
network = builder.create_network()
data = network.add_input("data", trt.infer.DataType.FLOAT, (1, 16, 16))

test_plugin = my_plugins.TestPlugin()
test_layer = network.add_plugin(data, 1, test_plugin)

This fails because add_plugin expects “nvinfer1::ITensor *const *” as its first argument (i.e. a pointer to an array of ITensor pointers), whereas “data” is (or can be converted to) an “ITensor *”.

TypeError: in method 'NetworkDefinition_add_plugin', argument 2 of type 'nvinfer1::ITensor *const *'

Is there any way to make this work?

I have run into the same problem, and the add_concatenation method suffers from it as well.

Until a better solution is found/available, I temporarily worked around this limitation by defining a custom wrapper function in my C++ plugin code.

// Wraps INetworkDefinition::addPlugin() for the common single-input case,
// hiding the ITensor *const * (array-of-pointers) argument from Python.
IPluginLayer* add_mono_plugin(INetworkDefinition *network, ITensor *input, IPlugin &plugin)
{
    ITensor *const it = input;  // one-element array of input tensors
    return network->addPlugin(&it, 1, plugin);
}

I’m basing my plugin on the “custom_layers/tensorrtplugins/” example, and SWIG automatically makes this function available from Python. I can then add my custom layer to the graph like this:

test_plugin = my_plugins.TestPlugin()
test_layer = my_plugins.add_mono_plugin(network, data, test_plugin)

As far as I can tell, it seems to work.
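For completeness, exposing the helper through SWIG needs nothing special, since SWIG wraps whatever the interface file includes. A hypothetical interface file (module and header names here are illustrative, not taken from the actual example) would look roughly like:

```swig
/* my_plugins.i -- hypothetical SWIG interface; "my_plugins.h" is assumed
   to declare TestPlugin and the add_mono_plugin() helper above. */
%module my_plugins
%{
#include "my_plugins.h"
%}
%include "my_plugins.h"
```

As long as add_mono_plugin() is declared in the wrapped header, it shows up as a module-level Python function.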

@lars
I am trying to do the same thing as you have done. I can create an instance of my class (which is derived from IPlugin) as shown below:

>>> from build.toy import Toy
>>> a = Toy()
Trying!

But I cannot create an instance of my class derived from IPluginFactory, as it has no constructor. I am not sure whether I need to define one there or not.

The issue I am stuck on is that I am unable to call

add_plugin

My sample code is given below

import tensorrt as trt
LOGGER = trt.Logger()
builder = trt.Builder(LOGGER)

from build.toy import ToyFactory as factory  # derived from nvinfer1::IPluginFactory
from build.toy import Toy                    # derived from nvinfer1::IPlugin

network = builder.create_network()
input_l = network.add_input(name="in", dtype=trt.float32, shape=(1,1,1))

>>> a = network.add_plugin(inputs=[input_l], plugin=Toy())
Trying!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError: std::bad_alloc

My Toy() constructor is given below

Toy() { std::cout << "Trying!" <<std::endl; }

Can you share the code segment showing how you created the plugin, and what your C++ class implementation looks like? Thanks.