I need to convert a model from PyTorch to TensorRT, but the model has layers that neither PyTorch nor TensorRT supports out of the box.
I wrote a custom layer in PyTorch and a custom plugin for TensorRT, and I have tested both. Now I want to convert the model using torch2trt.
But I cannot figure out what the steps are to wire the two together.
Please refer to the links below for custom plugin implementation and samples:
While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
Yes. I converted with implicit batch size, and inference with batch size 1 works fine, but with batch size > 1 only the first prediction is correct; the rest are zeros. So I decided to move to dynamic_torch2trt, but for that the custom plugin has to implement IPluginV2DynamicExt, and that is what I am working on now.