Resize error

Relevant Files

TensorFlow model zoo

Steps To Reproduce

I downloaded the pre-trained model from the TensorFlow model zoo and then ran the tf2onnx conversion command:
python3 -m tf2onnx.convert --saved-model models --output tf_model_op11.onnx --opset 11
This produced an ONNX file with a uint8 input type, so I used the ONNX GraphSurgeon tool to change it to float32 and re-export the model:
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("tf_model_op11.onnx"))
for inp in graph.inputs:
    inp.dtype = np.float32
onnx.save(gs.export_onnx(graph), "tf_model_op11.onnx")

After exporting the modified ONNX file, I ran:
trtexec --onnx=model_op11.onnx --fp16=enable --workspace=5500 --batch=1 --saveEngine=model_op11.trt --verbose
The model has a dynamic input, so I also tried the following command to see whether it produced a different error:
trtexec --explicitBatch --optShapesCalib=:1x1024x1024x3 --optShapes=:1x1024x1024x3 --onnx=model_op11_v2.onnx --saveEngine=model_op11.trt --shapes=\'input_tensor:0\':1x1024x1024x3 --fp16=enable --workspace=5500 --useCudaGraph=enable --exportTimes=times.json --verbose
This gave the same error as above. The suggested solution of downgrading torch won't help here, as torch isn't involved.

[06/11/2021-05:02:47] [I] === Model Options ===
[06/11/2021-05:02:47] [I] Format: ONNX
[06/11/2021-05:02:47] [I] Model: model_op11.onnx
[06/11/2021-05:02:47] [I] Output:
[06/11/2021-05:02:47] [I] === Build Options ===
[06/11/2021-05:02:47] [I] Max batch: 1
[06/11/2021-05:02:47] [I] Workspace: 5500 MB
[06/11/2021-05:02:47] [I] minTiming: 1
[06/11/2021-05:02:47] [I] avgTiming: 8
[06/11/2021-05:02:47] [I] Precision: FP32+FP16
[06/11/2021-05:02:47] [I] Calibration:
[06/11/2021-05:02:47] [I] Safe mode: Disabled
[06/11/2021-05:02:47] [I] Save engine: model_op11.trt
[06/11/2021-05:02:47] [I] Load engine:
[06/11/2021-05:02:47] [I] Builder Cache: Enabled
[06/11/2021-05:02:47] [I] NVTX verbosity: 0
[06/11/2021-05:02:47] [I] Inputs format: fp32:CHW
[06/11/2021-05:02:47] [I] Outputs format: fp32:CHW
[06/11/2021-05:02:47] [I] Input build shapes: model
[06/11/2021-05:02:47] [I] Input calibration shapes: model
[06/11/2021-05:02:47] [I] === System Options ===
[06/11/2021-05:02:47] [I] Device: 0
[06/11/2021-05:02:47] [I] DLACore:
[06/11/2021-05:02:47] [I] Plugins:
[06/11/2021-05:02:47] [I] === Inference Options ===
[06/11/2021-05:02:47] [I] Batch: 1
[06/11/2021-05:02:47] [I] Input inference shapes: model
[06/11/2021-05:02:47] [I] Iterations: 10
[06/11/2021-05:02:47] [I] Duration: 3s (+ 200ms warm up)
[06/11/2021-05:02:47] [I] Sleep time: 0ms
[06/11/2021-05:02:47] [I] Streams: 1
[06/11/2021-05:02:47] [I] ExposeDMA: Disabled
[06/11/2021-05:02:47] [I] Spin-wait: Disabled
[06/11/2021-05:02:47] [I] Multithreading: Disabled
[06/11/2021-05:02:47] [I] CUDA Graph: Disabled
[06/11/2021-05:02:47] [I] Skip inference: Disabled
[06/11/2021-05:02:47] [I] Inputs:
[06/11/2021-05:02:47] [I] === Reporting Options ===
[06/11/2021-05:02:47] [I] Verbose: Enabled
[06/11/2021-05:02:47] [I] Averages: 10 inferences
[06/11/2021-05:02:47] [I] Percentile: 99
[06/11/2021-05:02:47] [I] Dump output: Disabled
[06/11/2021-05:02:47] [I] Profile: Disabled
[06/11/2021-05:02:47] [I] Export timing to JSON file:
[06/11/2021-05:02:47] [I] Export output to JSON file:
[06/11/2021-05:02:47] [I] Export profile to JSON file:

Input filename: model_op11.onnx
ONNX IR version: 0.0.7
Opset version: 11
Producer name:
Producer version:
Model version: 0
Doc string:

[06/11/2021-05:02:49] [V] [TRT] ModelImporter.cpp:103: Parsing node: StatefulPartitionedCall/Preprocessor/ResizeToRange/cond [If]
[06/11/2021-05:02:49] [V] [TRT] ModelImporter.cpp:119: Searching for input: StatefulPartitionedCall/Preprocessor/ResizeToRange/Less:0
[06/11/2021-05:02:49] [V] [TRT] ModelImporter.cpp:125: StatefulPartitionedCall/Preprocessor/ResizeToRange/cond [If] inputs: [StatefulPartitionedCall/Preprocessor/ResizeToRange/Less:0 → ()],
ERROR: builtin_op_importers.cpp:1554 In function importIf:
[8] Assertion failed: cond.is_weights() && cond.weights().count() == 1 && "If condition must be a initializer!"
[06/11/2021-05:02:49] [E] Failed to parse onnx file
[06/11/2021-05:02:49] [E] Parsing model failed
[06/11/2021-05:02:49] [E] Engine creation failed
[06/11/2021-05:02:49] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=model_op11.onnx --fp16=enable --workspace=5500 --batch=1 --saveEngine=model_op11.trt --verbose

The whole verbose output

TensorRT Version: 7.1.3-1+cuda10.2
NVIDIA GPU: NVIDIA Tegra Xavier (nvgpu)/integrated
NVIDIA Driver Version: 32.4.4 Release Build
CUDA Version: 10.2
Operating System: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.4
Graphics : NVIDIA Tegra Xavier (nvgpu)/integrated
Processor : ARMv8 Processor rev 0 (v8l) × 6



Could you share which model you are trying to convert, along with the corresponding ONNX file?


These are some of the ONNX samples.


This is a known issue.
The error comes from the onnx2trt assertion below:

In short, TensorFlow emits an If operator whose condition is a runtime tensor, and this is not supported by TensorRT's ONNX parser.
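As an illustration of why a workaround is plausible here: once the input shape is fixed (e.g. 1x1024x1024x3), the comparison feeding the If node can be evaluated at conversion time, so the node could in principle be folded down to a single branch. A pure-Python sketch of the idea; the min_dim/max_dim values are made up, not the model's real ones:

```python
def resize_condition(height, width, min_dim=512, max_dim=1024):
    # Stand-in for the Less(...) comparison produced by ResizeToRange:
    # "does scaling the short side to min_dim keep the long side under max_dim?"
    scale = min_dim / min(height, width)
    return scale * max(height, width) < max_dim

# With a static input the condition is a build-time constant, so the If
# collapses to one branch instead of a runtime-dependent subgraph.
branch = "then" if resize_condition(1024, 1024) else "else"
```

This is the kind of constant folding a conversion-time workaround would perform so that the If condition becomes an initializer.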

We are checking whether a workaround can be applied to solve this issue.
We will share more information with you once we have made progress.



We are going to provide an EfficientDet example along with TensorRT v8.0 for Jetson.
Please wait for our next JetPack software release.

