Write a converter for torch2trt for a custom layer in PyTorch and TensorRT

I need to convert a model from PyTorch to TensorRT. The problem is that the model has layers that neither PyTorch nor TensorRT supports out of the box.
I wrote a custom layer in PyTorch and a custom plugin for TensorRT, and tested both. Now I want to convert the model using torch2trt.

But I cannot figure out what the steps are.

Should I write a custom plugin like these ones: torch2trt/torch2trt/plugins at master · NVIDIA-AI-IOT/torch2trt · GitHub
Or should I write a custom converter like these ones: torch2trt/torch2trt/converters at master · NVIDIA-AI-IOT/torch2trt · GitHub

Or do I need to write the custom plugin first and then write a converter on top of it?

I cannot find information about this on the internet. Is there a guide somewhere? Or maybe someone has experience and can point me in the right direction?
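For context, a torch2trt converter is a Python function registered against the qualified name of a PyTorch method; during tracing, torch2trt invokes it so it can add the corresponding TensorRT layer (or plugin) to the network being built. Below is a minimal sketch of that registration pattern, with TensorRT stubbed out so the example runs standalone; `MyCustomOp`, `my_custom_plugin`, and the stub classes are hypothetical placeholders, not torch2trt API.

```python
# Sketch of torch2trt's converter-registration mechanism (TensorRT stubbed
# out). In real torch2trt you would `from torch2trt import tensorrt_converter`
# and the converter body would add a layer to ctx.network, e.g. via
# ctx.network.add_plugin_v2(...) for a compiled custom plugin.

CONVERTERS = {}

def tensorrt_converter(method_name):
    """Mimics torch2trt's @tensorrt_converter decorator: registers a
    converter function under the qualified name of a PyTorch method."""
    def register(fn):
        CONVERTERS[method_name] = fn
        return fn
    return register

class FakeConversionContext:
    """Stand-in for torch2trt's conversion context; the real one exposes
    ctx.method_args, ctx.method_return, and ctx.network."""
    def __init__(self):
        self.layers = []  # records which TRT layers/plugins were "added"

@tensorrt_converter('MyCustomOp.forward')  # hypothetical custom op
def convert_my_custom_op(ctx):
    # A real converter would read input tensors from ctx.method_args,
    # add the plugin layer to ctx.network, and attach the resulting TRT
    # tensor to ctx.method_return._trt. Here we only record the step.
    ctx.layers.append('my_custom_plugin')

# Simulate torch2trt encountering the op during tracing:
ctx = FakeConversionContext()
CONVERTERS['MyCustomOp.forward'](ctx)
print(ctx.layers)
```

The key design point is that the plugin and the converter play different roles: the plugin implements the op inside TensorRT, while the converter is the glue that tells torch2trt when and how to insert it.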

Please refer to the links below for custom plugin implementation details and a sample:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.


I’ve read this. This is for ONNX, not torch2trt.


The problem might be how to export the op to an ONNX model.
After porting the model to ONNX, we can then see how to convert it to TRT.

Thank you.

Thanks. I used torch2trt for the conversion, not ONNX.


Were you able to convert to TRT successfully?

Yes. I converted with an implicit batch size. Inference with batch size 1 works fine, but with batch size > 1 only the first prediction is correct and the rest are zeros. So I decided to move to dynamic_torch2trt, but for that the custom plugin has to implement IPluginV2DynamicExt, and that is what I’m working on now.
