Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

Description

Parsing an ONNX model with the TensorRT ONNX parser fails in importResize with the assertion "Resize scales must be an initializer!".

Environment

TensorRT Version: 7.2.3.4
GPU Type: 3080
Nvidia Driver Version: 470
CUDA Version: 11.1
CUDNN Version: 8.1.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag): Baremetal

Hey,
My code looks like this:

import tensorrt as trt

logger = trt.Logger(trt.Logger.ERROR)

# create a builder
builder = trt.Builder(logger)

# create a network with explicit batch!
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# create the ONNX parser
parser = trt.OnnxParser(network, logger)

# read the model file
success = parser.parse_from_file("MyModel.onnx")
for idx in range(parser.num_errors):
    print(parser.get_error(idx))
if not success:
    print("Parser from file failed")

and I get this error:

ERROR: builtin_op_importers.cpp:2651 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

My question is: how can I debug this issue?
I have a really big model.

What is the best debugging process when I see an error like this?
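
A minimal sketch of one way to get more context before the failure, assuming the same MyModel.onnx as above and that the parser logs per-node progress when the logger severity is raised to VERBOSE:

import tensorrt as trt

# Assumption: with a VERBOSE logger, the parser reports each node as it is
# imported, so the last node logged before the assertion points at the
# offending Resize.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

if not parser.parse_from_file("MyModel.onnx"):
    for idx in range(parser.num_errors):
        print(parser.get_error(idx))  # each ParserError describes where the import failed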

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "yourONNXmodel"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command, as in the example below.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
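
An example invocation (assuming the MyModel.onnx name from above; --verbose prints the parser's per-node import log):

trtexec --onnx=MyModel.onnx --verbose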
Thanks!

trtexec gives me the same error.
I cannot check the ONNX file with the ONNX checker because I wrote a plugin for an unsupported layer and disabled the checker when I exported the ONNX. I think that layer is fine and the error comes from a different place.

I can open the ONNX file with Netron, but the model is very big. I just want to understand the debugging process and how to identify the problems. From what I understand, the model has a lot of problems, and I want to identify every one of them.

Hi,

Hope the following may help you.
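
Since the parser requires the Resize scales to be constant weights, here is a minimal sketch for locating the offending nodes with the onnx package (assuming an opset-11 model, where the scales tensor is the third Resize input, and the MyModel.onnx name used above):

import onnx

model = onnx.load("MyModel.onnx")
graph = model.graph

# Tensors the parser can treat as constant weights: graph initializers and
# (in many parser versions) the outputs of Constant nodes.
initializers = {init.name for init in graph.initializer}
constant_outputs = {out for node in graph.node if node.op_type == "Constant" for out in node.output}

for node in graph.node:
    if node.op_type != "Resize":
        continue
    # Opset 11+: inputs are (X, roi, scales, sizes); an empty name means the input is omitted.
    scales = node.input[2] if len(node.input) > 2 else ""
    if scales and scales not in initializers and scales not in constant_outputs:
        print("Resize node '%s' has non-constant scales input '%s'" % (node.name, scales))

Any node this prints computes its scales at runtime instead of taking them from a constant, which is what the assertion rejects; re-exporting with constant folding enabled (for PyTorch, do_constant_folding=True in torch.onnx.export) may help.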

We also recommend that you try the latest TensorRT 8.2 EA version.

Thank you.