I'm trying to accelerate model inference with TensorRT. The model was first converted from a TensorFlow saved model to ONNX format using tf2onnx.
When I parse the ONNX model with tensorrt.OnnxParser(), I get this error:
[TensorRT] VERBOSE: ModelImporter.cpp:103: Parsing node: generator/G_MODEL/A/MirrorPad [Pad]
[TensorRT] VERBOSE: ModelImporter.cpp:119: Searching for input: generator_input:0
[TensorRT] VERBOSE: ModelImporter.cpp:119: Searching for input: const_fold_opt__1670
[TensorRT] VERBOSE: ModelImporter.cpp:125: generator/G_MODEL/A/MirrorPad [Pad] inputs: [generator_input:0 -> (1, -1, -1, 3)], [const_fold_opt__1670 -> (8)],
ERROR: Failed to parse the ONNX file.
number of errors: 1
In node -1 (importPad): UNSUPPORTED_NODE: Assertion failed: mode == "constant" && value == 0.f && "This version of TensorRT only supports constant 0 padding!"
I have read some related issues posted by others, and it seems that TensorRT only supports constant 0 padding, but in my model the Pad node uses reflect mode. In this case, what can I do to solve this problem?
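In case it helps clarify what I need: one workaround I'm considering is rewriting the reflect-mode Pad into Slice + Flip + Concat ops (which TensorRT does support) before export. Below is a minimal NumPy sketch of the 1-D equivalence I would rely on; the function name is mine, and it assumes the pad width is smaller than the padded dimension:

```python
import numpy as np

def reflect_pad_1d(x, pad):
    """Emulate reflect padding along one axis using slice + flip + concat.

    Assumes 0 < pad < len(x). Mirrors interior elements without
    repeating the edge value, matching ONNX Pad with mode="reflect".
    """
    left = x[1:pad + 1][::-1]       # mirrored slice just inside the left edge
    right = x[-pad - 1:-1][::-1]    # mirrored slice just inside the right edge
    return np.concatenate([left, x, right])

x = np.array([1, 2, 3, 4, 5])
# The hand-rolled version should match NumPy's built-in reflect padding.
assert np.array_equal(reflect_pad_1d(x, 2), np.pad(x, 2, mode="reflect"))
```

If this decomposition is sound, I could apply the same slice-and-concat pattern per spatial axis in the graph, but I'm not sure whether that is the recommended approach.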
Thanks in advance for your reply!
TensorRT Version: 220.127.116.11
GPU Type: GTX 1080
Nvidia Driver Version: 450.51.06
CUDA Version: 10.0
CUDNN Version: 7.6.4
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): 3.7.10
TensorFlow Version (if applicable): tensorflow-gpu 1.15.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):