TensorRT-5.0.2.6 yolov3_onnx sample error!

Hello everyone, I installed the dependencies following the README.md, but the following errors occurred:

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
  %000_net[FLOAT, 64x3x608x608]
) initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
................
%106_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%105_convolutional_lrelu, %106_convolutional_conv_weights, %106_convolutional_conv_bias)
  return %082_convolutional, %094_convolutional, %106_convolutional

Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 760, in <module>
    main()
  File "yolov3_to_onnx.py", line 753, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/search/speech/tk/.local/lib/python2.7/site-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Node (086_upsample) has input size 1 not in range [min=2, max=2].

==> Context: Bad node spec: input: "085_convolutional_lrelu" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING } attribute { name: "scales" floats: 1 floats: 1 floats: 2 floats: 2 type: FLOATS }

Could you give me some advice here? Thanks!

Any suggestions?

The error is caused by a mismatch between ONNX versions; try running the following commands:

pip uninstall onnx
pip install onnx==1.2.2
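
To verify the downgrade took effect (a quick sanity check, not part of the original instructions), you can print the installed version; it should report 1.2.2:

python -c "import onnx; print(onnx.__version__)"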

@tiankai I ran into the same problem, and it works now. Thanks!

Hi tiankai, I hit the same error. I tried onnx==1.2.2, but it didn’t work. Have you solved the problem? Thank you!

Hello,

This sample mixes Python 2 and Python 3. Check that your pip points to the correct version of Python…
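
For example (a small illustration, assuming the sample is run with Python 2.7), you can check which interpreter pip is bound to and install onnx for that interpreter explicitly:

pip --version
python2.7 -m pip install onnx==1.2.2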

I can confirm that pip install onnx==1.2.2
fixed the issue for me on a Jetson Nano.

Regards, Markus

Maybe installing onnx 1.5.0 with Python 2.7 can help you.
Best,
Mary

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# load the model from the path given on the command line, e.g. python check_model.py model.onnx
model = onnx.load(sys.argv[1])
# raises a ValidationError if the model definition is invalid
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command (see the example invocation below).
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
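
As an illustration (assuming your exported model is saved as model.onnx; adjust the path to your file), a minimal invocation would be:

trtexec --onnx=model.onnx --verbose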
Thanks!