I have an ONNX model that runs inference on 3D medical images.
I've read that TensorRT 6 added support for 3D operations.
However, when I try to use the C++ ONNX parser in TensorRT-220.127.116.11 on Windows, I get the following error messages:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [pad]:
ERROR: builtin_op_importers.cpp:1415 In function importPad:
 Assertion failed: onnx_padding.size() == 8.
When I check the source of builtin_op_importers.cpp, it looks like the parser only accepts padding for a 2D (4-D NCHW) input, since 2 values per axis × 4 axes = 8, which matches the failing assertion.
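If it helps, here is a minimal sketch of the arithmetic behind that assertion (this is my reading of the ONNX spec, not NVIDIA's code): the ONNX `Pad` op stores 2 values per axis (begin/end) in its `pads` attribute, so a 5-D NCDHW volume produces 10 values and trips the `== 8` check that expects a 4-D NCHW tensor.

```python
# ONNX Pad stores 2 * rank entries in its "pads" attribute
# (one begin and one end value per axis).
def pads_length(rank):
    return 2 * rank

# 2D images: NCHW is rank 4 -> 8 pad values, matches the assertion.
assert pads_length(4) == 8

# 3D volumes: NCDHW is rank 5 -> 10 pad values, fails "size() == 8".
assert pads_length(5) == 10
```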
Does anybody know why TensorRT does not accept my ONNX model?
Also, any suggestions on what I should do next?
Thanks a lot!
BTW: the top lines of my ONNX file look like this: