Dear experts:
I have an unsupported layer in my network model. This layer has no parameters and is computationally lightweight, so there is no need to accelerate it with TRT. However, my model depends on this layer, so is there any way to prevent TRT from converting it and instead run it outside the engine during forward propagation? (A sketch of the split I have in mind is below the environment info.)
Environment
TensorRT Version: 7.0.11
GPU Type: TITAN V
Nvidia Driver Version: 430
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04 LTS
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.1.0
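For clarity, here is roughly the workaround I am hoping is possible: split the model around the unsupported layer, convert only the supported halves to TRT engines, and keep the layer itself running in PyTorch. A minimal sketch of the idea (the names `part1`, `unsupported_op`, and `part2` are hypothetical stand-ins for my real submodules):

```python
import torch

class SplitModel(torch.nn.Module):
    """Wraps the model as: supported half -> unsupported layer -> supported half."""

    def __init__(self, part1, unsupported_op, part2):
        super().__init__()
        self.part1 = part1                  # would be exported and run as TRT engine #1
        self.unsupported_op = unsupported_op  # stays in PyTorch, no TRT conversion
        self.part2 = part2                  # would be exported and run as TRT engine #2

    def forward(self, x):
        x = self.part1(x)
        x = self.unsupported_op(x)
        return self.part2(x)
```

The idea would be to export `part1` and `part2` to ONNX separately, build two engines, and glue them together with the PyTorch op in between at inference time.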
SunilJB,
Thanks for the reply. A quick follow-up question: what happens if I do not handle the unsupported layer? Will TRT raise an error and stop converting the model? I haven't given it a try yet; I just want to know in advance. @SunilJB
If TRT will leave this unsupported layer alone and let the model run, I think I do not need to manually implement a custom plugin. Is my understanding correct?
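I suppose I could also check this myself by parsing the exported ONNX file and printing any parser errors, without building an engine. A sketch using the TensorRT Python API (`model.onnx` is a placeholder path for my export):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def check_parse(onnx_path):
    """Parse the ONNX model and report parser errors; unsupported ops should surface here."""
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, "rb") as f:
            ok = parser.parse(f.read())
        if not ok:
            # Print each parse error instead of silently failing
            for i in range(parser.num_errors):
                print(parser.get_error(i))
        return ok

check_parse("model.onnx")
```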