How do you accelerate a particular layer with TensorRT?

Hey team,

I know that PReLU is not currently supported by TensorRT, but is there a way to skip these layers and accelerate only the layers that TensorRT does support?
Or, how can I use the plugin layer to integrate a custom layer? Are there any examples?

I was wondering the exact same thing and would be happy to find an answer!

TensorRT supports PReLU through our plugins in NvInferPlugin.h.

You can also look at samplePlugin for an example of how to write plugins.
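
In case a concrete snippet helps later readers, below is a rough sketch (not an official example) of how a plugin that is already registered, such as the ones shipped with NvInferPlugin.h, might be dropped into a network built with the C++ API. The creator name "PReLU_TRT", the version string "1", and the empty field collection are assumptions that vary across TensorRT releases, so check which creators your local plugin registry actually reports; samplePlugin remains the reference for implementing the plugin itself.

```cpp
#include <iostream>

#include "NvInfer.h"
#include "NvInferPlugin.h"

using namespace nvinfer1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

// Sketch: look up a plugin creator in the registry and insert the plugin
// after an existing tensor in the network definition.
// "PReLU_TRT" / "1" are assumed names; list the registered creators on
// your install to find the exact ones.
ITensor* addPReluFromRegistry(INetworkDefinition& network, ITensor& input)
{
    auto* creator = getPluginRegistry()->getPluginCreator("PReLU_TRT", "1");
    if (creator == nullptr)
        return nullptr; // plugin not available in this TensorRT build

    // No fields passed here; a real PReLU layer would need its learned
    // slope weights supplied through PluginField entries.
    PluginFieldCollection fc{};
    fc.nbFields = 0;
    fc.fields = nullptr;

    IPluginV2* plugin = creator->createPlugin("prelu_1", &fc);
    ITensor* inputs[] = {&input};
    IPluginV2Layer* layer = network.addPluginV2(inputs, 1, *plugin);
    return layer->getOutput(0);
}

int main()
{
    Logger logger;

    // Registers the plugins shipped with libnvinfer_plugin (NvInferPlugin.h).
    initLibNvInferPlugins(&logger, "");

    IBuilder* builder = createInferBuilder(logger);
    INetworkDefinition* network = builder->createNetworkV2(0);

    // Toy network: one input tensor followed by the plugin layer.
    ITensor* data = network->addInput("data", DataType::kFLOAT, Dims3{3, 224, 224});
    ITensor* out = addPReluFromRegistry(*network, *data);
    if (out != nullptr)
        network->markOutput(*out);

    // ... build the engine from the network as usual.

    return 0;
}
```

Error handling and object cleanup are omitted for brevity; the point is just that registry plugins are wired in with getPluginCreator / createPlugin / addPluginV2 rather than a dedicated layer method.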