onnx2trt - Depthwise Cross Correlation

Hi,
I have some trouble generating a TensorRT engine from an onnx model.
My model has a layer type (see https://github.com/STVIR/pysot/blob/master/pysot/core/xcorr.py - xcorr_depthwise) that does not seem to be supported right now, so I have to implement a custom plugin layer.
Looking into the TensorRT dev guide, I only found instructions for UFF and Caffe, but not for ONNX.

I also had a look at the ONNX GitHub repo. The 6.0 branch contains two examples (e.g. the split layer), which seem to be based on the old PluginV2 interface. In the current master branch, both are gone.

So - is there a guide or example available on how to implement such an ONNX plugin for the latest TensorRT release?

Hi,

You can refer to the plugin implementation below from TRT 6.0. This plugin is based on the ONNX opset 6 definition and is used by any ONNX model that uses this operation in TRT.
https://github.com/NVIDIA/TensorRT/tree/master/plugin/instanceNormalizationPlugin

Thanks

Hi,

Thanks. This example helped me a lot. The kernel and plugin layer are now finished, but I have one more question about adding the plugin to the parser.

Here is some code I’ve taken from the InstanceNormalization plugin. I’ve adapted it to fit my plugin (which has no weights, but performs some operations on two inputs).

DEFINE_BUILTIN_OP_IMPORTER(MyPlugin)
{
    nvinfer1::ITensor* tensor_ptr = &convertToTensor(inputs.at(0), ctx);
    ASSERT(!isDynamic(tensor_ptr->getDimensions()) && "MyPlugin does not support dynamic inputs!", ErrorCode::kUNSUPPORTED_NODE);

    OnnxAttrs attrs(node);

    // Populate MyPlugin properties (this plugin has no weights).
    const std::string pluginName = "MyPlugin_TRT";
    const std::string pluginVersion = "001";

    std::vector<nvinfer1::PluginField> f;

    // Create plugin from registry
    nvinfer1::IPluginV2* plugin = importPluginFromRegistry(ctx, pluginName, pluginVersion, node.name(), f);

    ASSERT(plugin != nullptr && "MyPlugin plugin was not found in the plugin registry!", ErrorCode::kINTERNAL_ERROR);
    RETURN_FIRST_OUTPUT(ctx->network()->addPluginV2(&tensor_ptr, 1, *plugin));
}

So what I need to do is add a second input tensor. I thought of something like this:

...
// Get a tensor for each input
nvinfer1::ITensor* tensor_ptr_0 = &convertToTensor(inputs.at(0), ctx);
nvinfer1::ITensor* tensor_ptr_1 = &convertToTensor(inputs.at(1), ctx);
...
// How to pass both tensors to the addPluginV2() function?
RETURN_FIRST_OUTPUT(ctx->network()->addPluginV2(&tensor_ptr, 1, *plugin));

but I’m not sure how to pass both inputs to my plugin.

Hi,

To support multiple inputs, create a plugin with dynamic inputs.
Please refer to the samples below for reference:
https://github.com/NVIDIA/TensorRT/tree/07ed9b57b1ff7c24664388e5564b17f7ce2873e5/plugin/proposalPlugin
https://github.com/NVIDIA/TensorRT/blob/07ed9b57b1ff7c24664388e5564b17f7ce2873e5/parsers/caffe/caffeParser/caffeParser.cpp

Thanks

@martin-91x were you able to create a siam tracker that runs with trt?