Mul op (last op in models) issue in JetPack 4.2.2

Hi,

We encountered the issue shown below on JetPack 4.2.2 (TensorRT version 5.1.6.1), while there was no such issue on the older JetPack:

While parsing node number 138 [Mul -> "save_infer_model/scale_0@tmp"]:
--- Begin node ---
input: "save_infer_model/scale_0@scale"
input: "sigmoid_0.tmp_0"
output: "save_infer_model/scale_0@tmp"
op_type: "Mul"

--- End node ---
ERROR: builtin_op_importers.cpp:353 In function importScaleOp:
[8] Assertion failed: get_shape_size(weights.shape) == get_shape_size(dims)
[E] failed to parse onnx file
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=treatment_classification.onnx --verbose

Please help check this ASAP, as the issue affects all of our ONNX models. Thanks.
mul.png

We have a (Mul + Add) op at the end of each of our ONNX models, and we don't know why this Mul op cannot be parsed with the latest JetPack when the old JetPack parsed the (Mul + Add) pair without problems.

Hi,

Could you provide a minimal reproducible ONNX file so we can give it a try?
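If it helps, a minimal model with the same pattern can be generated with the onnx Python package. The sketch below (names, shapes, and the scale value are made up for illustration) creates a single Mul node whose first input is a weight initializer, like the failing node:

import onnx
from onnx import helper, TensorProto

# Hypothetical names/shapes, mirroring the failing pattern:
# Mul(weight_initializer, activation) as the last op in the graph.
scale = helper.make_tensor("scale", TensorProto.FLOAT, [1], [0.5])
x = helper.make_tensor_value_info("sigmoid_out", TensorProto.FLOAT, [1, 4])
y = helper.make_tensor_value_info("scaled_out", TensorProto.FLOAT, [1, 4])
mul = helper.make_node("Mul", ["scale", "sigmoid_out"], ["scaled_out"])

graph = helper.make_graph([mul], "mul_repro", [x], [y], initializer=[scale])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 9)])
onnx.checker.check_model(model)
onnx.save(model, "mul_repro.onnx")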
Thanks.

Hi,

This issue is fixed in the open-source ONNX-TensorRT parser.
Please use it to convert the ONNX model into TensorRT instead:

$ git clone -b 5.1 --recursive https://github.com/onnx/onnx-tensorrt.git
$ cd onnx-tensorrt/
$ mkdir build
$ cd build/
$ cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53" -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include
$ make
$ sudo make install
$ onnx2trt treatment_classification.onnx -o treatment_classification.trt
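Note: GPU_ARCHS="53" matches the Jetson Nano's GPU (SM 5.3); if you build on a TX2 or Xavier instead, it should be "62" or "72" respectively.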

Thanks.

Hi,

I have updated the platform with the above information, but now it reports a MaxPool issue as below, which is rather strange:
Input filename: …/…/tensorrt/bin/treatment_classification.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: PaddlePaddle
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
Parsing model
While parsing node number 3 [MaxPool -> "pool2d_0.tmp_0"]:
ERROR: /usr/src/onnx-tensorrt/builtin_op_importers.cpp:1228 In function importMaxPool:
[8] Assertion failed: ctx->getOpsetVersion() < 10

This model could be parsed with the old JetPack.

I attached the model in this thread. This issue affects at least four of our models; please check it ASAP, thanks very much.
treatment_classification_onnx.zip (75.5 MB)

Hi,

Do you have any update yet? We are anxiously waiting for a response.

Hi,

That's because TensorRT 5.1 only supports up to opset-9, but your model uses opset-10.

onnx-tensorrt/builtin_op_importers.cpp

// TensorRT 5.1 only supports up to opset 9.
ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);

I manually removed the version-check assertions and was able to convert your model into a TensorRT engine:

diff --git a/builtin_op_importers.cpp b/builtin_op_importers.cpp
index 655976d..657fd18 100644
--- a/builtin_op_importers.cpp
+++ b/builtin_op_importers.cpp
@@ -547,7 +547,7 @@ DEFINE_BUILTIN_OP_IMPORTER(Atanh)
 
 DEFINE_BUILTIN_OP_IMPORTER(AveragePool) {
   // TensorRT 5.1 only supports up to opset 9.
-  ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
+// ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
   nvinfer1::ITensor* tensor_ptr = &convertToTensor(inputs.at(0), ctx);
   nvinfer1::Dims dims = tensor_ptr->getDimensions();
 #if NV_TENSORRT_MAJOR >= 4
@@ -925,7 +925,7 @@ DEFINE_BUILTIN_OP_IMPORTER(Div) {
 
 DEFINE_BUILTIN_OP_IMPORTER(Dropout) {
   // TensorRT 5.1 only supports up to opset 9.
-  ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
+// ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
   int noutputs = node.output().size();
   if (noutputs == 1)
   {
@@ -1225,7 +1225,7 @@ DEFINE_BUILTIN_OP_IMPORTER(Max) {
 
 DEFINE_BUILTIN_OP_IMPORTER(MaxPool) {
   // TensorRT 5.1 only supports up to opset 9.
-  ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
+// ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
   nvinfer1::ITensor* tensor_ptr = &convertToTensor(inputs.at(0), ctx);
   nvinfer1::Dims dims = tensor_ptr->getDimensions();
   ASSERT(dims.nbDims >= 2, ErrorCode::kINVALID_NODE);
@@ -1653,7 +1653,7 @@ DEFINE_BUILTIN_OP_IMPORTER(Size) {
 
 DEFINE_BUILTIN_OP_IMPORTER(Slice) {
   // TensorRT 5.1 only supports up to opset 9.
-  ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
+// ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
   nvinfer1::ITensor& tensor = convertToTensor(inputs.at(0), ctx);
   OnnxAttrs attrs(node);
   const auto starts = attrs.get<std::vector<int64_t>>("starts");
@@ -1880,7 +1880,7 @@ DEFINE_BUILTIN_OP_IMPORTER(ThresholdedRelu) {
 #if NV_TENSORRT_MAJOR >= 4
 DEFINE_BUILTIN_OP_IMPORTER(TopK) {
   // TensorRT 5.1 only supports up to opset 9.
-  ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
+// ASSERT(ctx->getOpsetVersion() < 10, ErrorCode::kUNSUPPORTED_NODE);
   nvinfer1::ITensor& tensor = convertToTensor(inputs.at(0), ctx);
   ASSERT(tensor.getType() != nvinfer1::DataType::kINT32,
          ErrorCode::kUNSUPPORTED_NODE);

However, it is still recommended to use a lower opset version or to wait for our TensorRT 6.0 release.
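If regenerating the model is not possible, another option (a sketch, not verified on your model) is to downgrade it with the onnx version converter before parsing; note the converter can fail on ops whose definition changed in opset-10, such as Slice:

import onnx
from onnx import version_converter

model = onnx.load("treatment_classification.onnx")
# Rewrite the graph against opset 9; raises an error if an op cannot be converted.
converted = version_converter.convert_version(model, 9)
onnx.save(converted, "treatment_classification_opset9.onnx")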

Thanks.

Hi,

I don't understand the meaning of "opset"; could you explain it?

I have also removed the opset check, but I still cannot convert our ONNX models to a TRT engine.

With the old JetPack, some ONNX layers were not supported; with the current JetPack, none of our ONNX models can be converted at all. This situation blocks us from moving forward.

When will the JetPack with TensorRT v6.0 be released?

Hi,

Your model can be converted into TensorRT in our environment without issue.

Have you rebuilt the onnx-tensorrt parser after applying the change?
Please remember to re-execute these commands:

$ cd [onnx-tensorrt]/build
$ cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53" -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include
$ make
$ sudo make install

The opset version is chosen when converting a model into ONNX.
You should be able to export your model from PaddlePaddle to ONNX with opset-9 directly.
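You can check which opset an exported model declares with the onnx Python package (the paddle2onnx exporter also accepts a target opset option at export time, though the exact flag name may vary by version):

import onnx

model = onnx.load("treatment_classification.onnx")
for opset in model.opset_import:
    # An empty domain means the default ai.onnx operator set.
    print(opset.domain or "ai.onnx", opset.version)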

JetPack 4.3 does support TensorRT 6.0, but only for Xavier.

Thanks.

Hi,
I have rebuilt the parser, and the models can now be converted to TRT engines. I am testing them with our models.

We are waiting for a JetPack release with TensorRT 6.0.

Thanks.

Hi,

Thanks for the update.
We will let you know once TensorRT 6.0 is available for the Jetson Nano.

Thanks.