Description
I have an ONNX model (opset 11) containing a ZeroPadding2D layer, based on the DeepLabV2 segmentation architecture. When I try to convert it, TensorRT engine creation fails. Following a suggestion on GitHub, I converted the model to opset 10; the engine then builds successfully, but the TensorRT inference results do not match the TensorFlow inference results. Is ZeroPadding2D officially supported in TensorRT? I also tried replacing ZeroPadding2D with tf.pad, but engine creation fails with that as well. Could you please help me resolve this issue? Due to security concerns, I am unable to share the ONNX model.
Environment
TensorRT Version: 7
GPU Type: GeForce GTX 1050 Ti
Nvidia Driver Version: 441.41
CUDA Version: 10.0
CUDNN Version: 7.6.4
Operating System + Version: Windows 10
Python Version (if applicable): 3.7.4
TensorFlow Version (if applicable): 2.0
Onnx version: 1.6.0
Opset version: 11
Looking forward to your reply.
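To quantify the TF-vs-TensorRT mismatch, I compare both inference outputs on the same input roughly as sketched below. The random arrays are stand-ins for the two inference outputs (the shape assumes a DeepLab-style 21-class segmentation map and is only illustrative); a small injected perturbation stands in for the observed deviation.

```python
# Sketch: measure the numerical gap between TF and TensorRT outputs
# for the same input. Random data stands in for real inference results.
import numpy as np

rng = np.random.default_rng(0)
tf_out = rng.standard_normal((1, 21, 65, 65)).astype(np.float32)
# Simulated TensorRT output: the TF output plus a small perturbation.
trt_out = tf_out + rng.normal(scale=1e-3, size=tf_out.shape).astype(np.float32)

max_abs = float(np.max(np.abs(tf_out - trt_out)))
print(f"max abs diff: {max_abs:.6f}")
print("allclose(atol=1e-2):", np.allclose(tf_out, trt_out, atol=1e-2))
```

With a real mismatch, the maximum absolute difference and the argmax class maps usually make it clear whether the gap is mere float noise or a genuinely wrong padding.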