NVES
December 11, 2018, 9:45pm
2
Hello,
To import a PyTorch model into TensorRT: save your PyTorch model as an ONNX file, then import that ONNX model into TensorRT.
via Python: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#import_onnx_python
Via C++: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
Take a look at this sample: GitHub - modricwang/Pytorch-Model-to-TensorRT
Regarding custom layers: users can extend TensorRT's functionality by implementing custom layers with the IPluginV2 class via the C++ and Python APIs. Please see https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending
Hi,
TensorRT 5 GA only provides IPluginV2 examples for Caffe/TensorFlow models. How can I implement custom layers for ONNX models?
I need some examples, thanks!
Hi, I've run into this problem too. How can I implement custom layers for ONNX models? I just need to add a PRelu layer.
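For context, the PRelu operation itself is simple. A minimal pure-Python sketch of the math a custom plugin's enqueue step would have to compute (this is only illustrative; it is not TensorRT plugin API code, and `slope` stands in for the learned per-layer parameter):

```python
def prelu(x, slope):
    """PRelu: y = x if x > 0, else slope * x (elementwise).

    This is just the math a custom TensorRT plugin would implement
    in its kernel; the real plugin would run this on the GPU over
    the layer's input tensor.
    """
    return [xi if xi > 0 else slope * xi for xi in x]

# Negative inputs are scaled by the slope; non-negative pass through.
print(prelu([-2.0, -0.5, 0.0, 1.5], slope=0.25))  # → [-0.5, -0.125, 0.0, 1.5]
```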
Hey qwe258sj,
Were you able to register your custom Plugin through nvonnxparser?
Hi ebraheem,
I haven’t found a solution yet.
Hi, for ONNX models I need an example that uses the C++ IPluginV2 class to add a layer …
cliu13
May 13, 2019, 12:56am
8
@NVES , could you please answer the questions above? I think adding custom layers in onnx-tensorrt is a common need.