Hi,
I have an ONNX model that runs inference on 3D medical images.
I’ve read that TensorRT 6 added support for 3D operations.
However, when I try to use the C++ ONNX parser in TensorRT-6.0.1.5 on Windows, I get the following error messages:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [pad]:
ERROR: builtin_op_importers.cpp:1415 In function importPad:
[8] Assertion failed: onnx_padding.size() == 8.
When I check the source code of builtin_op_importers.cpp, it seems like the importer only expects padding for a 2D (4-D NCHW) input.
Does anybody know why TensorRT does not accept my ONNX model?
Also, any suggestions on what I should try next?
Thanks a lot!
John
BTW: the top lines of my ONNX file look like this:
ir_version: 4
producer_name: "pytorch"
producer_version: "1.1"
graph {
  node {
    input: "myinput"
    output: "127"
    op_type: "Pad"
    attribute {
      name: "mode"
      s: "constant"
      type: STRING
    }
    attribute {
      name: "pads"
      ints: 0
      ints: 0
      ints: 1
      ints: 1
      ints: 1
      ints: 0
      ints: 0
      ints: 1
      ints: 1
      ints: 1
      type: INTS
    }
    attribute {
      name: "value"
      f: 0.0
      type: FLOAT
    }
  }
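To confirm the mismatch, here is a quick sanity check I did (plain Python, with the `pads` values copied from the dump above). The parser asserts `onnx_padding.size() == 8`, i.e. 2 pad values per dimension of a 4-D (NCHW) tensor, but my 5-D (NCDHW) input carries 10 values:

```python
# Pad values copied from the "pads" attribute in the dump above.
# ONNX stores them as [begin_dim0, ..., begin_dimN, end_dim0, ..., end_dimN].
pads = [0, 0, 1, 1, 1, 0, 0, 1, 1, 1]

num_dims = len(pads) // 2
print(len(pads), num_dims)  # 10 pad values -> a 5-D (NCDHW) tensor

# This mirrors the check in builtin_op_importers.cpp that fails for my model:
# the parser expects 8 values (a 4-D NCHW tensor), so a 5-D input trips it.
assert len(pads) != 8, "parser's 4-D-only assertion would pass"
```

So the Pad node itself looks well-formed; it is just more dimensions than the importer's assertion allows.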