yolov3_to_onnx.py sample failure

Hi all,

I’m attempting to follow the instructions for the tensorrt sample: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#yolov3_onnx

I have installed onnx-tensorrt - does this error likely indicate that something is wrong with that installation?

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
  %000_net[FLOAT, 64x3x608x608]
) initializers (
  %001_convolutional_bn_scale[FLOAT, 32]
  %001_convolutional_bn_bias[FLOAT, 32]

...
%105_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [3, 3], strides = [1, 1]](%104_convolutional_lrelu, %105_convolutional_conv_weights)
  %105_convolutional_bn = BatchNormalization[epsilon = 9.99999974737875e-06, momentum = 0.990000009536743](%105_convolutional, %105_convolutional_bn_scale, %105_convolutional_bn_bias, %105_convolutional_bn_mean, %105_convolutional_bn_var)
  %105_convolutional_lrelu = LeakyRelu[alpha = 0.100000001490116](%105_convolutional_bn)
  %106_convolutional = Conv[auto_pad = u'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%105_convolutional_lrelu, %106_convolutional_conv_weights, %106_convolutional_conv_bias)
  return %082_convolutional, %094_convolutional, %106_convolutional
}
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 761, in <module>
    main()
  File "yolov3_to_onnx.py", line 754, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/home/luke/.local/lib/python2.7/site-packages/onnx/checker.py", line 77, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Input index 3 must be set to consumed for operator BatchNormalization

==> Context: Bad node spec: input: "001_convolutional" input: "001_convolutional_bn_scale" input: "001_convolutional_bn_bias" input: "001_convolutional_bn_mean" input: "001_convolutional_bn_var" output: "001_convolutional_bn" name: "001_convolutional_bn" op_type: "BatchNormalization" attribute { name: "epsilon" f: 1e-05 type: FLOAT } attribute { name: "momentum" f: 0.99 type: FLOAT }

Hello,

To verify your onnx-tensorrt installation, please see https://github.com/onnx/onnx-tensorrt#tests

test_abs_cpu (__main__.OnnxBackendNodeTest) ... skipped u"Backend doesn't support device CPU"
test_abs_cuda (__main__.OnnxBackendNodeTest) ... (Unnamed Layer* 0) [Unary]
(4, 5)
> /home/luke/onnx-tensorrt/onnx_tensorrt/backend.py(111)__init__()
-> trt_engine = self.builder.build_cuda_engine(self.network)
(Pdb)

It just does this and then nothing else. Is that fatal?

I have an i7-8750H.

What version of onnx are you using? I remember I had issues with the onnx conversion when using versions other than 1.4.1.
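(A quick way to check which onnx version is installed, for anyone comparing: the sketch below uses `importlib.metadata`, which is Python 3.8+ — this thread is running Python 2.7, where `pip show onnx` gives the same information.)

```python
# Report the installed onnx version, or note its absence.
# Reads package metadata directly, so it works even when
# "import onnx" itself is broken.
from importlib.metadata import version, PackageNotFoundError

try:
    print(version("onnx"))
except PackageNotFoundError:
    print("onnx is not installed")
```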

+1

Changed my requirements file to install at most version 1.4.1 of onnx, and it solved the problem.
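For anyone landing here later, a pin like the one described would look like this in a requirements.txt (file name and exact constraint assumed — the poster's actual file isn't shown):

```
onnx<=1.4.1
```

Then reinstall with `pip install -r requirements.txt`.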