assertion failed: onnx_padding.size() == 8

Hi,

I have an ONNX model that runs inference on 3D medical images.
I’ve read that TensorRT 6 added support for 3D operations.

However, when I try to use the C++ ONNX parser in TensorRT-6.0.1.5 on Windows, I get the following error messages:

WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [pad]:
ERROR: builtin_op_importers.cpp:1415 In function importPad:
[8] Assertion failed: onnx_padding.size() == 8.

When I check the source of builtin_op_importers.cpp, it looks like the parser expects the padding of a 2D (4D NCHW) input, since the pads attribute carries one begin and one end value per dimension (2 × 4 = 8)?

Does anybody know why TensorRT does not accept my ONNX model?
Also, any suggestions on what I should try next?

Thanks a lot!
John

BTW, the top lines of my ONNX file (dumped as text) look like this:

ir_version: 4
producer_name: "pytorch"
producer_version: "1.1"
graph {
  node {
    input: "myinput"
    output: "127"
    op_type: "Pad"
    attribute {
      name: "mode"
      s: "constant"
      type: STRING
    }
    attribute {
      name: "pads"
      ints: 0
      ints: 0
      ints: 1
      ints: 1
      ints: 1
      ints: 0
      ints: 0
      ints: 1
      ints: 1
      ints: 1
      type: INTS
    }
    attribute {
      name: "value"
      f: 0.0
      type: FLOAT
    }
  }
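For reference, the ten pads values above decode to one begin/end pair per dimension of a 5D NCDHW tensor, while the parser asserts eight values (a 4D NCHW input). A minimal stand-alone sketch of that arithmetic (no onnx package needed):

```python
# The "pads" attribute from the Pad node above.
pads = [0, 0, 1, 1, 1, 0, 0, 1, 1, 1]

# ONNX stores one "begin" and one "end" value per input dimension,
# so the input rank is half the length of the attribute.
rank = len(pads) // 2           # 5, i.e. a 5D NCDHW tensor
begins, ends = pads[:rank], pads[rank:]
print(rank, begins, ends)       # 5 [0, 0, 1, 1, 1] [0, 0, 1, 1, 1]

# TensorRT 6's importPad asserts len(pads) == 8 (a 4D NCHW input),
# which is why this 5D Pad node fails to parse.
assert len(pads) != 8
```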

Hi,

Currently TensorRT only supports 2D padding as a standalone layer.
3D padding is supported within other 3D operations such as convolution and pooling, but not as a standalone padding layer.

Please refer to the link below for the layer support matrix:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#layers-matrix

Thanks

Thanks SunilJB!

Yeah, I figured out yesterday that the Pad is actually an independent layer, separate from the Conv3D.
I managed to modify the ONNX model itself to fold the padding into the Conv3D.
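For anyone hitting the same issue: when the Pad is constant with value 0 and only pads the spatial dims, it can be folded into the following Conv's pads attribute. Here is a hedged pure-Python sketch of just the arithmetic (actual graph surgery would use the onnx package; fold_pad_into_conv is a hypothetical helper, not a library function):

```python
def fold_pad_into_conv(pad_pads, conv_pads):
    """Fold an ONNX Pad node's pads into a Conv node's pads attribute.

    pad_pads:  [x1_begin, ..., xn_begin, x1_end, ..., xn_end] over ALL
               input dims (including batch N and channel C).
    conv_pads: [d1_begin, ..., dk_begin, d1_end, ..., dk_end] over the
               SPATIAL dims only.
    Returns the merged Conv pads.
    """
    rank = len(pad_pads) // 2
    begins, ends = pad_pads[:rank], pad_pads[rank:]
    # Folding is only valid if batch and channel dims are not padded.
    assert begins[:2] == [0, 0] and ends[:2] == [0, 0]
    spatial = rank - 2
    new_begins = [b + p for b, p in zip(conv_pads[:spatial], begins[2:])]
    new_ends = [e + p for e, p in zip(conv_pads[spatial:], ends[2:])]
    return new_begins + new_ends

# The Pad node from the model above, folded into a Conv3D with zero pads:
merged = fold_pad_into_conv(
    [0, 0, 1, 1, 1, 0, 0, 1, 1, 1],  # Pad's pads (5D NCDHW)
    [0, 0, 0, 0, 0, 0],              # Conv3D's pads (3 spatial dims)
)
print(merged)  # [1, 1, 1, 1, 1, 1]
```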

Thanks again.
Zheng