Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

Description

I was trying to convert an ONNX model to a TensorRT engine file in a TensorRT Docker container (docker pull nvcr.io/nvidia/tensorrt:20.03-py3)
and got the following error:
Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

Can you please guide me on what the issue is here?

Environment

TensorRT Version: 7.1
CUDA Version: 10.2
Operating System + Version: Ubuntu 18.04

Relevant Files

Here’s the model: keras-detect-model.onnx - Google Drive

Steps To Reproduce

trtexec --onnx=keras-detect-model.onnx --explicitBatch --verbose

Hi @aaryan,
From the error, the issue looks like it is coming from the opset.
The post below should be able to address it.

However, please share your script in case the issue persists.
Thanks!

@AakankshaS
I have already used opset 11 for this model. My script is attached below. Please check the issue.
Script:

Hi @AakankshaS, just wanted to know if there’s any update on this topic?

Hi @aaryan,
A quick look to your model shows that you are using resize layer with scale factor of output tensor from concat layer


I am afraid, this is not supported by TensorRT yet. We only support scale factor as constant weights, that’s why the error happened

Thanks!

I had the exact same error.
Only the latest TensorRT Docker container (21.11-py3) with opset 12 managed to export my model successfully.

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside that, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

I met the same error when converting my yolov5x FP16 model to TensorRT, but not with yolov5x FP32. Here is my model file. PyTorch version: 1.12, CUDA 11.3, TensorRT version: 8.0.1.6.

Thanks!