Custom layer implementation via the Plugin API is described in the developer guide below:
This guide only covers custom layer implementations for Caffe and UFF models. I wonder whether the described workflow is valid for ONNX models as well.
Also, what is the proper workflow for implementing custom layers for an ONNX model so that the model can be successfully parsed with TensorRT?
I’ve done some research and found that one way to implement custom layers could be to directly modify the onnx-tensorrt (https://github.com/onnx/onnx-tensorrt) source code, build it, and link the resulting .so file into the TensorRT application, so that unsupported layers can be parsed by nvonnxparser.
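For reference, the build workflow I have in mind looks roughly like this; the branch name and the TensorRT install path are assumptions on my part, so they would need to be adjusted for a specific Jetson setup:

```shell
# Sketch: build a custom nvonnxparser from onnx-tensorrt source.
# Branch name and TENSORRT_ROOT path are assumptions — adjust as needed.
git clone --recursive https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt
git checkout 6.0              # branch matching TensorRT 6 (assumed)
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt
make -j"$(nproc)"
# The build should produce libnvonnxparser.so, which the TensorRT
# application would then link against instead of the stock parser.
```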
Is this the go-to method for dealing with custom layers in ONNX models, or do I also need to do something on TensorRT’s Plugin API side?
Btw, I am working on Jetson devices, so I am limited to TensorRT 6 as of now, in case that matters.