Is the Plugin API supported by the ONNX parser?


Custom layer implementation using the Plugin API is described in the developer guide below:

The guide only covers custom layer implementations for Caffe and UFF models. I wonder whether the described workflow is valid for ONNX models as well.

Also, what is the proper workflow for implementing custom layers for an ONNX model so that TensorRT can successfully parse it?
I’ve done some research and found that one way to implement custom layers is to modify the onnx-tensorrt source code directly, build it, and link the resulting .so file into the TensorRT application so that unsupported layers can be parsed by nvonnxparser.

Is this the go-to method when dealing with custom layers for ONNX models, or do I also need to do something on the TensorRT Plugin API side?

By the way, I am working on Jetson devices, so I am limited to TensorRT 6 for now, if that matters.



There should be better support for using custom plugins with ONNX models in the next release. Until then, yes, I believe the most feasible way is to add your custom op to the onnx-tensorrt source:

This is being tracked here as well: (I see you just commented on that thread as well)
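For reference, adding a new op to onnx-tensorrt generally means registering an importer in builtin_op_importers.cpp. A rough, untested sketch is below; the `DEFINE_BUILTIN_OP_IMPORTER` macro and `convertToTensor` helper come from the onnx-tensorrt source (check the exact signatures in your checkout), while `MyCustomOp` and the ReLU mapping are purely illustrative placeholders:

```cpp
// In onnx-tensorrt/builtin_op_importers.cpp (sketch only, not verified
// against a specific commit). "MyCustomOp" stands in for the ONNX op
// type that the stock parser rejects.
DEFINE_BUILTIN_OP_IMPORTER(MyCustomOp)
{
    // Grab the input tensor that the ONNX node feeds in.
    nvinfer1::ITensor& input = convertToTensor(inputs.at(0), ctx);

    // Map the ONNX op onto an existing TensorRT layer, or onto a plugin
    // layer you have implemented. Here we pretend the op behaves like a
    // ReLU activation purely for illustration.
    auto* layer = ctx->network()->addActivation(
        input, nvinfer1::ActivationType::kRELU);
    ASSERT(layer, ErrorCode::kUNSUPPORTED_NODE);

    return {{layer->getOutput(0)}};
}
```

After rebuilding, the produced libnvonnxparser.so replaces the stock one when linking the application.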


Thank you for your quick reply.

Let’s say I implemented the custom layers in onnx-tensorrt source code and built it (then used the updated library). After that, do I need to do something extra on the TensorRT side, maybe with Plugin API?

Or, after implementing the custom layer in onnx-tensorrt, is the workflow the same as parsing an ONNX model whose layers are all fully supported?

I would expect that after editing the onnx-tensorrt source and rebuilding it, you should just be able to parse the model as usual, with no Plugin API code.

I have done something similar for modifying an existing op, but I haven’t done this myself for a new op yet. I don’t really know the limitations of this approach yet.
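For completeness, "parsing as usual" with the rebuilt parser is the same as for a fully supported model. A rough TensorRT 6 C++ sketch (error handling trimmed; the model path and the `nvinfer1::ILogger` implementation are assumed to exist in your application):

```cpp
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Sketch: build an engine from an ONNX file with the rebuilt parser.
nvinfer1::ICudaEngine* buildEngine(const char* onnxPath,
                                   nvinfer1::ILogger& logger)
{
    auto* builder = nvinfer1::createInferBuilder(logger);
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto* network = builder->createNetworkV2(flags);

    // The custom op is handled inside the parser itself, so no extra
    // Plugin API calls appear at this level.
    auto* parser = nvonnxparser::createParser(*network, logger);
    if (!parser->parseFromFile(onnxPath,
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return nullptr;

    auto* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28); // 256 MiB scratch space
    return builder->buildEngineWithConfig(*network, *config);
}
```

The only difference from a stock setup is linking against the rebuilt libnvonnxparser.so instead of the one shipped with TensorRT.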
