Padding and speedup of TensorRT inference

I have a TensorFlow model that I converted to an ONNX model. As you can see in the attached model summary image, the model has convolution layers with padding (1,1,1,1), but my actual input image size is 256x256. So, can you tell me if the padding of 1 is hurting my TensorRT performance?
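As a quick sanity check on the shapes involved (a sketch assuming 3x3 kernels with stride 1, which is the usual configuration paired with padding (1,1,1,1); your model may differ):

```python
def conv2d_out_size(in_size, kernel=3, stride=1, pad=1):
    """Standard convolution output-size formula: (N + 2P - K) / S + 1."""
    return (in_size + 2 * pad - kernel) // stride + 1

# With 3x3 kernels and pad=1, the 256x256 spatial size is preserved
# ("same" padding), so the padded border is tiny relative to the
# convolution work itself.
print(conv2d_out_size(256, kernel=3, stride=1, pad=1))  # -> 256
print(conv2d_out_size(256, kernel=3, stride=1, pad=0))  # -> 254 (no padding shrinks the output)
```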

Hi,
Can you try running your model with the trtexec command and share the `--verbose` log if the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
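For example, a minimal invocation might look like the following (the file names here are placeholders; it assumes trtexec is on your PATH, which it is by default in the TensorRT container images):

```shell
# Build an engine from the ONNX model and capture the verbose build/run log.
# "model.onnx" is a placeholder for your actual model file.
trtexec --onnx=model.onnx --verbose > trtexec_verbose.log 2>&1
```

The resulting `trtexec_verbose.log` contains per-layer timing and the tactics TensorRT selected, which shows whether the padded convolutions are being fused and how much time they actually take.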

You can refer to the link below for the full list of supported operators; if any operator is not supported, you need to create a custom plugin to support that operation.

Also, please share your model and script, if you haven't already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the links below:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#error-messaging
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#faq

Thanks!