Could not parse ONNX model from file


When trying to convert an ONNX file into a TensorRT engine file on my Jetson Nano, I get the following error:

ERROR: builtin_op_importers.cpp:3422 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
|yolov5|error|[Builder] buildEngine() failure: could not parse ONNX model from file

Detailed log of conversion is attached.
As can be seen in the log, the initial PyTorch model was generated using YOLOv5 and converted to ONNX through:


TensorRT Version: 8.0.1
GPU Type: Jetson Nano (Maxwell)
CUDA Version: 10.2.300
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 18.04 (Jetpack 4.6)
PyTorch Version (if applicable): 1.12.1

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

TensorRT conversion log.txt (104.6 KB)

Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:

import onnx
filename = "yourONNXmodel.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
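For reference, a minimal trtexec invocation might look like the following (the model filename and install path are assumptions; on Jetson boards trtexec typically ships under /usr/src/tensorrt/bin):

```shell
# Build an engine from the ONNX model and capture the full verbose log
/usr/src/tensorrt/bin/trtexec --onnx=CantConvert.onnx --verbose 2>&1 | tee trtexec_verbose.log
```

The `tee` makes it easy to attach the complete log to the thread afterwards.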

Here are:
the ONNX model:
CantConvert.onnx (27.2 MB)

and conversion script: (22.8 KB)

The log of failed ONNX to TRT conversion is already attached in my previous post.

Any opinion?

Hi @JoleCRO ,
By looking at the error, the issue looks like it is coming from the opset.
This post might be relevant to resolving the issue.


The link you provided seems outdated: it uses TRT 7, while I use TRT 8.
Nevertheless, I tried all eight possible (and impossible) opset versions (from 9 to 16) and still keep getting the same error, regardless of the opset I try.
The same ONNX-to-TRT conversion script (which I provided above) worked successfully when I got the PT model from some other PyTorch version (I don't know which one it was), so why would my current 1.12.1 generate a problematic model whose ONNX cannot be successfully converted to TRT?

Any ideas regarding this? I already provided the log, ONNX, and script files in my post from four months ago…


We are able to successfully build the TensorRT engine on the latest version, 8.6.1.

[09/05/2023-09:06:09] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=CantConvert.onnx --verbose

We recommend you try the latest version.
Also, please make sure your conversion script is correct. Please refer to the following samples.

Thank you.


Thanks for the answer. As far as I understood, you suggested that I use the trtexec tool to do the ONNX → TRT file conversion, but two things remain unclear:

  1. Do I have to install trtexec on the same board type that I run inference on (in my case: a Jetson Nano with JetPack 4.6.0, which includes TensorRT 8.0.1)? I guess I cannot upgrade JetPack's TRT version beyond 8.1, and certainly not to the TRT 8.6 you mentioned, right?

  2. When I try to build trtexec from: , there is no Makefile in the samples/trtexec folder, so the make command (as suggested in the manuals) fails with "No targets specified and no makefile found". How do I compile trtexec then?

Any advice on above two issues?
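As context for question 2: on JetPack images, trtexec is usually already installed alongside TensorRT, so a build may not be needed at all. The paths below are an assumption based on the standard JetPack layout and may differ on other setups:

```shell
# trtexec typically ships prebuilt with TensorRT on JetPack:
ls /usr/src/tensorrt/bin/trtexec

# If a rebuild is needed, the installed sample sources carry their own Makefile:
cd /usr/src/tensorrt/samples/trtexec
sudo make
```

If neither path exists, the trtexec sources in the TensorRT OSS repository must instead be built with CMake from the repository root, not with a bare make inside samples/trtexec.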