Description
Error converting an ONNX model to TensorRT: `trtexec` aborts with "Attribute not found: pads" when parsing the opset 11 model (steps to reproduce below).
Environment
TensorRT Version: 7
GPU Type: GeForce 930M
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: ubuntu 18.04
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 2.1
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Steps To Reproduce
I saved a TensorFlow 2.1 float32 model using model.save(), then converted it to ONNX with the tf2onnx Python API, once with opset 10 and once with opset 11.
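For reference, the conversion step looked roughly like this (the SavedModel directory name is illustrative, not my exact path):

```shell
# Convert the TF SavedModel to ONNX with tf2onnx (1.6.0).
# Run once per opset; --opset 10 produced the other model.
python -m tf2onnx.convert \
    --saved-model ./saved_model \
    --opset 11 \
    --output model_op11.onnx
```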
For TensorRT I ran trtexec. The opset 11 model failed with the log below:
./trtexec --explicitBatch --onnx=model_op11.onnx
[04/12/2020-15:37:34] [I] === Model Options ===
[04/12/2020-15:37:34] [I] Format: ONNX
[04/12/2020-15:37:34] [I] Model: model_op11.onnx
[04/12/2020-15:37:34] [I] Output:
[04/12/2020-15:37:34] [I] === Build Options ===
[04/12/2020-15:37:34] [I] Max batch: explicit
[04/12/2020-15:37:34] [I] Workspace: 16 MB
[04/12/2020-15:37:34] [I] minTiming: 1
[04/12/2020-15:37:34] [I] avgTiming: 8
[04/12/2020-15:37:34] [I] Precision: FP32
[04/12/2020-15:37:34] [I] Calibration:
[04/12/2020-15:37:34] [I] Safe mode: Disabled
[04/12/2020-15:37:34] [I] Save engine:
[04/12/2020-15:37:34] [I] Load engine:
[04/12/2020-15:37:34] [I] Inputs format: fp32:CHW
[04/12/2020-15:37:34] [I] Outputs format: fp32:CHW
[04/12/2020-15:37:34] [I] Input build shapes: model
[04/12/2020-15:37:34] [I] === System Options ===
[04/12/2020-15:37:34] [I] Device: 0
[04/12/2020-15:37:34] [I] DLACore:
[04/12/2020-15:37:34] [I] Plugins:
[04/12/2020-15:37:34] [I] === Inference Options ===
[04/12/2020-15:37:34] [I] Batch: Explicit
[04/12/2020-15:37:34] [I] Iterations: 10
[04/12/2020-15:37:34] [I] Duration: 3s (+ 200ms warm up)
[04/12/2020-15:37:34] [I] Sleep time: 0ms
[04/12/2020-15:37:34] [I] Streams: 1
[04/12/2020-15:37:34] [I] ExposeDMA: Disabled
[04/12/2020-15:37:34] [I] Spin-wait: Disabled
[04/12/2020-15:37:34] [I] Multithreading: Disabled
[04/12/2020-15:37:34] [I] CUDA Graph: Disabled
[04/12/2020-15:37:34] [I] Skip inference: Disabled
[04/12/2020-15:37:34] [I] Inputs:
[04/12/2020-15:37:34] [I] === Reporting Options ===
[04/12/2020-15:37:34] [I] Verbose: Disabled
[04/12/2020-15:37:34] [I] Averages: 10 inferences
[04/12/2020-15:37:34] [I] Percentile: 99
[04/12/2020-15:37:34] [I] Dump output: Disabled
[04/12/2020-15:37:34] [I] Profile: Disabled
[04/12/2020-15:37:34] [I] Export timing to JSON file:
[04/12/2020-15:37:34] [I] Export output to JSON file:
[04/12/2020-15:37:34] [I] Export profile to JSON file:
[04/12/2020-15:37:34] [I]
Input filename: model_op11.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.0
Domain:
Model version: 0
Doc string:
[04/12/2020-15:37:34] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(… the same INT64 → INT32 warning repeated 38 more times …)
terminate called after throwing an instance of 'std::out_of_range'
what(): Attribute not found: pads