ONNX model parsing fails in TensorRT 7

Description

I have an ONNX model (opset 11) based on the DeepLabV2 segmentation architecture that contains a ZeroPadding2D layer. When I try to convert it into a TensorRT engine, engine creation fails. Based on a suggestion on GitHub, I converted the ONNX model to opset 10; after that, the TensorRT engine is created successfully, but there is a mismatch between the TensorRT inference results and the TensorFlow inference results. Is ZeroPadding2D officially supported in TensorRT? Instead of ZeroPadding2D, I also tried tf.pad, but with that, TensorRT engine creation fails as well. Could you please help me resolve this issue? Due to security concerns, I am unable to share the ONNX model.
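For reference, the opset downgrade was done roughly as follows (a minimal sketch using onnx's version_converter; the file names are placeholders, and the down-converter may not cover every op):

```python
import onnx
from onnx import version_converter

# Load the opset 11 model exported from TensorFlow (placeholder path)
model = onnx.load("deeplab_v2_opset11.onnx")

# Down-convert to opset 10 so the TensorRT 7 parser accepts the graph
converted = version_converter.convert_version(model, 10)

# Sanity-check the converted graph before handing it to TensorRT
onnx.checker.check_model(converted)
onnx.save(converted, "deeplab_v2_opset10.onnx")
```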

Environment

TensorRT Version: TensorRT 7
GPU Type: GeForce GTX 1050 Ti
Nvidia Driver Version: 441.41
CUDA Version: CUDA 10.0
CUDNN Version: cuDNN 7.6.4
Operating System + Version: Windows 10
Python Version (if applicable): Python 3.7.4
TensorFlow Version (if applicable): TensorFlow 2.0
ONNX Version: 1.6.0
Opset Version: 11

Waiting for your valuable reply.

Hi @abhinandtm.t,
Could you please share the verbose logs from building the TRT engine?
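If it helps, the verbose output can be captured by building the engine with a VERBOSE logger, along these lines (a minimal sketch against the TensorRT 7 Python API; the model path and workspace size are placeholders):

```python
import tensorrt as trt

# A VERBOSE logger prints every parser/builder step, which is what we need here
logger = trt.Logger(trt.Logger.VERBOSE)

# ONNX models require an explicit-batch network in TensorRT 7
explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
builder = trt.Builder(logger)
network = builder.create_network(explicit_batch)
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file; on failure, dump every parser error
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

builder.max_workspace_size = 1 << 28  # 256 MiB, adjust as needed
engine = builder.build_cuda_engine(network)
```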

Thanks!

Hi, thanks for the quick reply.
I have made the following observations:
1.) The verbose log captured while generating the TensorRT engine with the opset 11 model is as follows.


In this case, TensorRT engine creation fails.

2.) The verbose log captured while generating the TensorRT engine with the opset 10 model is as follows.

In this case, TensorRT engine creation succeeds, but the TensorRT inference results differ from the TensorFlow inference results. Both models contain dilated convolutions.
I have also observed that for an opset 10 ONNX model containing a zero-padding layer and a normal (non-dilated) convolution, the inference results from TensorRT and TensorFlow match.

Is dilated convolution supported in TensorRT?
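Since I cannot share the model itself, the problematic pattern is roughly the following (a minimal sketch; the layer sizes are illustrative, not those of the real network):

```python
import tensorflow as tf

# Minimal model with only a zero-padding layer and a dilated convolution,
# mirroring the pattern where the TensorRT/TensorFlow mismatch shows up
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.ZeroPadding2D(padding=2)(inputs)
x = tf.keras.layers.Conv2D(8, kernel_size=3, dilation_rate=2, padding="valid")(x)
model = tf.keras.Model(inputs, x)
model.save("dilated_conv_repro.h5")  # then export to ONNX, e.g. with tf2onnx
```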
Waiting for your valuable reply.
Thanks

Hi @abhinandtm.t,
Could you please share the entire verbose log along with the error?

Thanks!