ONNX Attributes aren't parsed with Custom Plugin


When using the ONNX parser with my custom plugin creator, the number of attribute fields reported is zero; however, when viewing the graph in NETRON, the attributes are clearly there and populated correctly.


TensorRT Version:
GPU Type: RTX 3090
Nvidia Driver Version: 455.23.05
CUDA Version: 11.1
CUDNN Version:
ONNX Version: 1.6.0 (Opset 11)
Operating System + Version: ubuntu1804
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): 1.7.0
Baremetal or Container (if container which image + tag): N/A

Relevant Files

You can find the export code in nnet_training/utilities/onnx_export.py.

All the plugins are found in runtime/cerberus_net/trt_plugins. In particular, I am trying to get the correlation layer working correctly, but this applies to the others as well (although some of those have string attributes, which don't seem to export properly anyway).

I can upload an ONNX graph to Google Drive if needed.

I feel I was reasonably close at one point to getting the entire network working; I just needed to implement ScatterND, but it's been getting worse the more I try to fix it lol. It would be good to finish, since Correlation, grid_sampler and ScatterND are reasonably frequently used operations; once validated I can make a PR to the TensorRT repo.

Steps To Reproduce

Building is the usual affair. There are some supplementary files, such as labels and test images, that you need, but you could probably just comment out those lines of code.

Reproduction-wise, I just print out the number of fields the plugin creator thinks it has, and it reports zero.

Hi @frenzi,
Please refer to the link below; it might help here.


That sample seems to modify an existing ONNX graph rather than building it with the correct attributes in the first place, which it looks like I am already doing when I inspect the graph in NETRON. Here's the code I use to define the operators, along with a screenshot from NETRON. The model itself can be found here: https://drive.google.com/file/d/1rN90pkVZcFqOQ0vxtRswB0R2U1m9dV69/view?usp=sharing

from torch.onnx.symbolic_helper import parse_args

@parse_args('v', 'v', 'i', 'i', 'i', 'i', 'i', 'i')
def correlation_op(g, input1, input2, pad_size, kernel_size,
                   max_displacement, stride1, stride2, corr_multiply):
    # The _i suffix marks each keyword argument as an ONNX int attribute.
    return g.op("cerberus::correlation", input1, input2, pad_size_i=pad_size,
                kernel_size_i=kernel_size, max_displacement_i=max_displacement,
                stride1_i=stride1, stride2_i=stride2, corr_multiply_i=corr_multiply)

@parse_args('v', 'v', 's', 's', 'b')
def grid_sample_op(g, input1, input2, mode, padding_mode, align_corners):
    return g.op("torch::grid_sampler", input1, input2, mode_s=mode,
                padding_mode_s=padding_mode, align_corners_i=align_corners)

def foo_export(model, dummy_input_1, dummy_input_2):
    torch.onnx.register_custom_op_symbolic('cerberus::correlation', correlation_op, 11)
    torch.onnx.register_custom_op_symbolic('::grid_sampler', grid_sample_op, 11)
    torch.onnx.export(
        model, (dummy_input_1, dummy_input_2),
        'model.onnx', opset_version=11)

It turns out I had originally based my plugin creator on another developer's, which was incorrect: I had to register the attributes in the creator's constructor.

    mPluginAttributes.emplace_back(nvinfer1::PluginField("pad_size", nullptr, nvinfer1::PluginFieldType::kINT32, 1));
    mPluginAttributes.emplace_back(nvinfer1::PluginField("kernel_size", nullptr, nvinfer1::PluginFieldType::kINT32, 1));
    mPluginAttributes.emplace_back(nvinfer1::PluginField("max_displacement", nullptr, nvinfer1::PluginFieldType::kINT32, 1));
    mPluginAttributes.emplace_back(nvinfer1::PluginField("stride1", nullptr, nvinfer1::PluginFieldType::kINT32, 1));
    mPluginAttributes.emplace_back(nvinfer1::PluginField("stride2", nullptr, nvinfer1::PluginFieldType::kINT32, 1));
    mPluginAttributes.emplace_back(nvinfer1::PluginField("corr_multiply", nullptr, nvinfer1::PluginFieldType::kINT32, 1));

    mFC.nbFields = mPluginAttributes.size();
    mFC.fields = mPluginAttributes.data();

I now have functioning plugins for the grid_sampler, correlation and ScatterND operations.
