Import PyTorch model into TensorRT and add custom plugin layers

I have two questions:

  1. I want to import a PyTorch model into TensorRT. How can I do that?
  2. I want to add custom layers (plugins) to the imported model, for example an upsampling layer that has to be inserted between existing layers of the network. How can I do that after importing the model from PyTorch?

Hello,

To import a PyTorch model into TensorRT: you’d export your PyTorch model to an ONNX file, then import that ONNX model into TRT.

Via Python: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#import_onnx_python
Via C++: see the corresponding section of the same Developer Guide (NVIDIA Deep Learning TensorRT Documentation)

Take a look at this sample: https://github.com/modricwang/Pytorch-Model-to-TensorRT
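For reference, here’s a minimal sketch of that flow in Python, assuming the TensorRT 5 Python API; the model, input shape, and file name are placeholders:

```python
import torch
import torchvision
import tensorrt as trt

# Step 1: export the PyTorch model to an ONNX file.
# resnet18 and the input shape are placeholders; substitute your own model.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")

# Step 2: parse the ONNX file with TensorRT and build an engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 30  # 1 GiB of build workspace
engine = builder.build_cuda_engine(network)
```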

Regarding custom layers: users can extend TensorRT’s functionality by implementing custom layers with the IPluginV2 class in both the C++ and Python APIs. Please reference https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending
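To give an idea of the Python side, here is a sketch of looking up a plugin creator in the registry and inserting the resulting IPluginV2 layer into a parsed network. The plugin name "MyUpsamplePlugin" and its "scale" field are hypothetical; they must match whatever your own creator (typically implemented in C++ and registered via REGISTER_TENSORRT_PLUGIN) announces:

```python
import numpy as np
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Make the plugins shipped with TensorRT visible in the global registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

def get_plugin_creator(name):
    # Search the global plugin registry for a creator registered under `name`.
    for creator in trt.get_plugin_registry().plugin_creator_list:
        if creator.name == name:
            return creator
    return None

# Hypothetical plugin name and field; your IPluginCreator defines the real ones.
creator = get_plugin_creator("MyUpsamplePlugin")
scale = trt.PluginField("scale", np.array([2], dtype=np.int32),
                        trt.PluginFieldType.INT32)
plugin = creator.create_plugin("MyUpsamplePlugin",
                               trt.PluginFieldCollection([scale]))

# Insert the plugin after an existing layer of the parsed network.
# `network` is the trt.INetworkDefinition produced by the ONNX parser above.
prev_output = network.get_layer(network.num_layers - 1).get_output(0)
plugin_layer = network.add_plugin_v2(inputs=[prev_output], plugin=plugin)
network.mark_output(plugin_layer.get_output(0))
```

Because add_plugin_v2 takes arbitrary input tensors from the network definition, the same pattern lets you splice a plugin (e.g. an upsampling layer) between existing layers, rather than relying on the parser to resolve the custom op.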

Hi,

TensorRT 5 GA only provides IPluginV2 examples for Caffe/TensorFlow. How can custom layers be implemented for ONNX models?

I need some examples, thanks!

Hi, I ran into this problem too. How can I implement custom layers for ONNX models? I just need to add a PRelu layer.

Hey qwe258sj,
Were you able to register your custom plugin through nvonnxparser?

Hi ebraheem,
I haven’t found a way yet.

Hi, for ONNX models I need an example that uses the C++ IPluginV2 class to add a layer …

@NVES, could you please answer the questions above? I think adding a custom layer in onnx-tensorrt is a common requirement.