Some ONNX layers cannot be parsed by TensorRT

Hi,

We found that some ONNX layers cannot be parsed by TensorRT, even though the TensorRT ONNX support matrix says these layers are supported. Please help check the issues below.

1 ArgMax

[i]onnx: v.onnx
fp16

Input filename: v.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: PaddlePaddle
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
While parsing node number 68 [ArgMax -> "arg_max_0.tmp_0@argmax"]:
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/ModelImporter.cpp:142 In function importNode:
[8] No importer registered for op: ArgMax
failed to parse onnx file
Engine could not be created
Engine could not be created[/i]
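The `No importer registered` line names the exact op the parser lacks. A small stdlib-only helper (hypothetical, not part of TensorRT) can scan a captured parser log and collect every unsupported op at once, so each model only needs one parsing pass:

```python
import re

def unsupported_ops(parser_log):
    """Collect op names from 'No importer registered for op: X' lines."""
    ops = re.findall(r"No importer registered for op: (\w+)", parser_log)
    # Preserve first-seen order, drop duplicates.
    return list(dict.fromkeys(ops))

log = """\
While parsing node number 68 [ArgMax -> "arg_max_0.tmp_0@argmax"]:
[8] No importer registered for op: ArgMax
failed to parse onnx file
"""
print(unsupported_ops(log))  # ['ArgMax']
```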

2 Reshape
[i]onnx: dr_diagnosis.onnx
fp16

Input filename: d.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: PaddlePaddle
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
While parsing node number 428 [Reshape -> "bilinear_pooling_reshape_2D.tmp_0"]:
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/builtin_op_importers.cpp:1314 In function importReshape:
[8] Assertion failed: get_shape_size(new_shape) == get_shape_size(tensor.getDimensions())
failed to parse onnx file
Engine could not be created
Engine could not be created[/i]
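This assertion compares element counts: `get_shape_size` is just the product of the dimensions, so the parser is rejecting a Reshape whose target shape holds a different number of elements than the input tensor. One common trigger is a shape tensor containing `0` or `-1` wildcards that this parser build resolves differently than the exporter intended. A rough pure-Python equivalent of the check (shapes are illustrative, not taken from the model):

```python
from functools import reduce
from operator import mul

def get_shape_size(shape):
    """Product of all dimensions, mirroring the parser's element count."""
    return reduce(mul, shape, 1)

tensor_dims = (1, 64, 7, 7)    # example input tensor
ok_shape = (1, 64 * 7 * 7)     # 3136 elements: Reshape accepted
bad_shape = (1, 2048)          # 2048 elements: assertion fails

print(get_shape_size(tensor_dims) == get_shape_size(ok_shape))   # True
print(get_shape_size(tensor_dims) == get_shape_size(bad_shape))  # False
```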

3 Add
[i]onnx: l.onnx
fp16

Input filename: l.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: PaddlePaddle
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
While parsing node number 3 [Add -> "conv1.tmp_1"]:
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/builtin_op_importers.cpp:328 In function importAdd:
[8] Assertion failed: get_shape_size(shift_weights.shape) == get_shape_size(dims)
failed to parse onnx file
Engine could not be created
Engine could not be created[/i]
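Here the parser compares the constant shift (bias) weights against the tensor's full dimensions, which suggests this import path does not broadcast: an Add exported as a per-channel bias of shape [C] then fails against a conv output of shape [N, C, H, W]. A sketch of the check (the shapes below are illustrative assumptions, not read from the model):

```python
from functools import reduce
from operator import mul

def get_shape_size(shape):
    """Product of all dimensions, as the parser's assertion uses."""
    return reduce(mul, shape, 1)

conv_output = (1, 64, 112, 112)  # dims of a conv output (illustrative)
bias = (64,)                     # per-channel shift weights from the exporter

# importAdd asserts equal element counts, so a [C] bias against an
# [N, C, H, W] tensor trips the assertion instead of broadcasting.
print(get_shape_size(bias) == get_shape_size(conv_output))  # False
```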

4 MatMul

When execution reaches the MatMul layer, it reports a segmentation fault.

Please help check the above issues.

Hi,

These four operations are directly supported by TensorRT starting with v5.1.2:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-515/tensorrt-release-notes/tensorrt-5.html#rel_5-1-2-RC

We recommend upgrading to the latest JetPack first.
Thanks.

Hi,

I have a lot of software, code, and settings on my Nano device; upgrading to the latest JetPack would make it hard to rebuild my environment.

I have checked, and there is no binary release of the latest TensorRT, or of TensorRT v5.1.2, for Jetson devices.

How can I install the latest TensorRT on my Nano, then?

Hi,

JetPack 4.2.2 supports TensorRT 5.1.6.1:
https://docs.nvidia.com/jetson/archives/jetpack-archived/jetpack-421/release-notes/index.html#additional-details
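After re-flashing, it is worth confirming that the installed version clears the v5.1.2 floor before retrying the models. A small stdlib sketch comparing dotted version strings (the version values here are examples, not queried from your device):

```python
def version_tuple(v):
    """'5.1.6.1' -> (5, 1, 6, 1) for element-wise comparison."""
    return tuple(int(part) for part in v.split("."))

installed = "5.1.6.1"  # e.g. as reported by the TensorRT packages on the Nano
required = "5.1.2"     # first release with ArgMax/Reshape/Add/MatMul support

print(version_tuple(installed) >= version_tuple(required))  # True
```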

Thanks.

Hi,

I have updated the JetPack on my Nano, but the issues are the same.

These ops are still not supported.

Have you ever encountered these issues? Do you have any suggestions for debugging them?

Hi,

Does TensorRT support L2-Normalize in ONNX models?

Hi,

Please see the attached ONNX file to check whether this is an L2-Normalize op support issue. This op affects three of our models; please help check it ASAP.
dd_new_onnx.zip (36 MB)

Hi,

Could you explain more about the L2-Normalize op you want?
In general, we apply L2 norm operation in our batch normalization layer.
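If your parser build lacks a native importer for it, L2-Normalize can usually be expressed with ops the parser already accepts (Mul, ReduceSum, Sqrt, Div). A pure-Python sketch of the math over one axis (the epsilon value is an assumption, matching common exporter defaults):

```python
import math

def l2_normalize(values, eps=1e-12):
    """x / sqrt(sum(x^2) + eps): expressible as Mul, ReduceSum, Sqrt, Div."""
    norm = math.sqrt(sum(v * v for v in values) + eps)
    return [v / norm for v in values]

print(l2_normalize([3.0, 4.0]))  # ~[0.6, 0.8]
```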

Thanks.

Hi hi-bigcat,

Is this still an open issue, or has it been clarified and resolved?