Error while trying to convert ONNX model file to TRT engine

Hi,

While trying to convert a custom ONNX model (exported from PyTorch) to TensorRT, I ran into the errors below.

Earlier I was facing this error:

ERROR: onnx2trt_utils.hpp:277 In function convert_axis: [8] Assertion failed: axis >= 0 && axis < nbDims

This was solved by replacing torch.view() with torch.flatten() and exporting the ONNX model again.
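
For reference, the change was roughly the following (x here is just a placeholder activation, not my real model code):

import torch

x = torch.randn(1, 32, 10, 10)       # placeholder activation, not the real model
# before: x = x.view(x.size(0), -1)  # exported as a Reshape whose axis the parser rejected
y = torch.flatten(x, start_dim=1)    # after: exported as a Flatten node, shape (1, 3200)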

Parsing now proceeds to the next node in the graph, but I still get the following error:

ERROR: builtin_op_importers.cpp:728 In function importConcat:

[8] Assertion failed: input.is_tensor()

Please help!

Hi,

Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Also, if possible please share the script & model file to reproduce the issue.

Meanwhile, could you please try to use the trtexec command to test the model?
trtexec is useful for benchmarking networks, and it would be faster and easier to debug the issue using its --verbose mode.

Thanks

Hi,

I was already using the trtexec command to test the model:

./trtexec --verbose --onnx=/tmp/model22.onnx --saveEngine=/tmp/top_down.engine

- CUDA version: 10.0
- Jetson TX2 with JetPack 4.2.3

This is the end portion of the log when I use the --verbose option.

[V] [TRT] 677:Transpose -> (2, 2, 16)
[V] [TRT] 678:Flatten -> (4800)
[V] [TRT] 679:Flatten -> (1200)
[V] [TRT] 680:Flatten -> (300)
[V] [TRT] 681:Flatten -> (72)
[V] [TRT] 682:Flatten -> (32)
[V] [TRT] 683:Concat -> (6404)
[V] [TRT] 684:Constant -> (3)
[V] [TRT] 685:Reshape -> (3202, 2)
[V] [TRT] 686:Flatten -> (9600)
[V] [TRT] 687:Flatten -> (2400)
[V] [TRT] 688:Flatten -> (600)
[V] [TRT] 689:Flatten -> (144)
[V] [TRT] 690:Flatten -> (64)
[V] [TRT] 691:Concat -> (12808)
[V] [TRT] 692:Constant -> (3)
[V] [TRT] 693:Reshape -> (3202, 4)
[V] [TRT] 694:Softmax -> (3202, 2)
[V] [TRT] 695:Constant -> (1, 3202, 4)
While parsing node number 235 [Concat -> "696"]:
--- Begin node ---
input: "694"
input: "693"
input: "695"
output: "696"
op_type: "Concat"
attribute {
name: "axis"
i: 2
type: INT
}

--- End node ---
ERROR: builtin_op_importers.cpp:728 In function importConcat:
[8] Assertion failed: input.is_tensor()
[E] failed to parse onnx file
[E] Engine could not be created
[E] Engine could not be created

Thanks

Hi,

As per the trtexec output, it seems that the input tensors to the concat layer [Concat -> "696"] do not all have the same shape.
For a concat operation, all input tensors must have the same shape, except for the size of the dimension along the axis being concatenated on.
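
For illustration, here is the same rule as seen from PyTorch before export (the shapes below are only examples, not taken from your model):

import torch

a = torch.zeros(1, 3202, 2)
b = torch.zeros(1, 3202, 4)
ok = torch.cat([a, b], dim=2)   # works: shapes agree on every dim except dim 2 -> (1, 3202, 6)

c = torch.zeros(3202, 4)        # different rank from a and b
# torch.cat([a, c], dim=2)      # fails: inputs must have the same rank and matching sizes
#                               # on every dimension other than the concat axis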

Please refer to below link for more details:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#Concat

Thanks

But both input tensors have the same shape, I just checked. Both are of torch.float64 dtype.

Hi,

As per the trtexec output that you shared, the concat layer has 3 inputs, and the shape of the 695:Constant -> (1, 3202, 4) layer is different from the other two input shapes, e.g. 694:Softmax -> (3202, 2).
All input tensors might have the same dtype, but the issue seems to be due to the shapes. The node summary is repeated below, and a rough sketch of one way to align the ranks follows it.

--- Begin node ---
input: "694"
input: "693"
input: "695"
output: "696"
op_type: "Concat"
attribute {
name: "axis"
i: 2
type: INT
}

[V] [TRT] 693:Reshape -> (3202, 4)
[V] [TRT] 694:Softmax -> (3202, 2)
[V] [TRT] 695:Constant -> (1, 3202, 4)
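
If the mismatch comes from how the constant tensor (e.g. priors/anchors) is built in your PyTorch code, one possible fix is to give all three inputs the same rank before the concat, so the exported Concat sees consistent shapes. A rough sketch with hypothetical names (your actual variables will differ):

import torch

def merge_outputs(scores, boxes, priors):
    # assumed shapes from your log: scores (3202, 2), boxes (3202, 4), priors (1, 3202, 4)
    scores = scores.unsqueeze(0)                      # -> (1, 3202, 2)
    boxes = boxes.unsqueeze(0)                        # -> (1, 3202, 4)
    return torch.cat([scores, boxes, priors], dim=2)  # -> (1, 3202, 10)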

If possible could you please share the script & model file so we can better help?
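
You can also double-check the shapes actually recorded in the ONNX file for those three inputs, e.g. with ONNX shape inference (a rough sketch; the tensor names are taken from your log, symbolic dims print as 0, and constants stored as initializers may need to be looked up separately):

import onnx
from onnx import shape_inference

model = shape_inference.infer_shapes(onnx.load("/tmp/model22.onnx"))
wanted = {"693", "694", "695"}
for vi in list(model.graph.value_info) + list(model.graph.output):
    if vi.name in wanted:
        dims = [d.dim_value for d in vi.type.tensor_type.shape.dim]
        print(vi.name, dims)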

Thanks