How to prevent TRT from converting unsupported layers?

Description

Dear experts:
I have an unsupported layer in my network model. This layer does not contain any params and has little computation intensity, so it is unnecessary to accelerate it by trt. My model relies on this layer, so is there any way to avoid trt to convert this layer during forward propagation?

Environment

TensorRT Version: 7.0.11
GPU Type: TITAN V
Nvidia Driver Version: 430
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04 LTS
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.1.0

The best approach in this case would be to create a custom plugin to replace the unsupported layer while generating the TRT engine.
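For reference, here is a minimal sketch (not from the original post) of how a registered plugin could be inserted into a network with the TensorRT Python API. The plugin name "MyCustomLayer", its version, and the helper function are assumptions for illustration; on TRT 7 the plugin itself would still need to be implemented and registered (typically in C++) before this lookup can succeed.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # load all registered plugins

def add_custom_layer(network, inputs):
    """Insert a hypothetical plugin named 'MyCustomLayer' into the network."""
    registry = trt.get_plugin_registry()
    creator = registry.get_plugin_creator("MyCustomLayer", "1")  # name/version are assumptions
    if creator is None:
        raise RuntimeError("Plugin 'MyCustomLayer' is not registered")
    # The layer in question has no parameters, so the field collection is empty.
    plugin = creator.create_plugin("my_custom_layer", trt.PluginFieldCollection([]))
    layer = network.add_plugin_v2(inputs=inputs, plugin=plugin)
    return layer.get_output(0)
```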

Thanks

SunilJB,
Thanks for the reply. A quick follow-up question: what would happen if I do not handle the unsupported layer? Will TRT raise an error and stop converting the model? I haven't given it a try; I just want to know in advance. @SunilJB

If TRT will leave this unsupported layer alone and let the model run, I think I do not need to manually implement a custom plugin. Is my understanding correct?
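(For context, one quick way to see how TRT reacts to the layer, without writing any plugin, is to run the ONNX parser and print its errors; parsing typically fails on unsupported ops rather than silently skipping them. A minimal sketch, with "model.onnx" as a placeholder path:)

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse the model and report which layers the parser could not handle.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```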

For TensorFlow there is TF-TRT, which lets the user skip the plugin-creation effort because unsupported layers fall back to native TensorFlow.
I don't think we have any such provision for PyTorch.
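For completeness, a minimal TF-TRT sketch (assuming TensorFlow 2.x and a SavedModel at a placeholder path); TF-TRT converts the supported subgraphs to TRT and leaves the rest in TensorFlow instead of failing:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# "saved_model_dir" and "saved_model_trt_dir" are placeholder paths.
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_dir")
converter.convert()  # unsupported ops stay in native TensorFlow
converter.save("saved_model_trt_dir")
```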

Thanks