Error while Parsing ONNX model - TensorRT 5

Hi

I have successfully converted an MXNet model to ONNX. While parsing it in TensorRT 5 RC, I am facing the following errors:

Initially, the error was:

Input filename: …/models/model.onnx
ONNX IR version: 0.0.3
Opset version: 8
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/ModelImporter.cpp:98 In function importInputs:
[8] Assertion failed: convert_onnx_weights(initializer, &weights)
ERROR: failed to parse onnx file
However, after reading in the docs that TensorRT 5.0 RC supports ONNX IR (Intermediate Representation) version 0.0.3 and opset version 7, I converted the ONNX model to opset 7 following this guide: https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#converting-version-of-an-onnx-model-within-default-domain-aionnx
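
For reference, the conversion itself is just a couple of lines with the ONNX Python API described in that guide (the file names here are placeholders):

import onnx
from onnx import version_converter

# Load the opset 8 model exported from MXNet and convert it to opset 7,
# which the TensorRT 5.0 RC docs list as supported.
model = onnx.load("model.onnx")
converted = version_converter.convert_version(model, 7)
onnx.save(converted, "model_opset7.onnx")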

Secondly, following the guidance in Known Issues 3 and 4 (regarding the protobuf and ONNX parser libs) in the TensorRT 5 RC Release Notes, I performed the following steps:

  1. Downloaded the tar package of TensorRT 5 RC and copied the libs libprotobuf.a and libprotobuf-lite.a from its lib/ directory to /usr/lib/x86_64-linux-gnu
  2. Built and installed onnx-tensorrt from source
    However, even after all this, the error still persists, but with less info now:
    ERROR: failed to parse onnx file

Following are my platform details:
Ubuntu 16.04
GPU 1080Ti
Nvidia driver 384.130
CUDA version 9.0.176
CUDNN version 7.1.2
G++ version 5.4.0
TensorRT package used nv-tensorrt-repo-ubuntu1604-cuda9.0-trt5.0.0.10-rc-20180906_1-1_amd64

Any pointers regarding this problem are welcome!

Regards

Hello,

To help us debug, can you share a repro containing the ONNX model and the code that exhibits the parsing errors you are seeing?

regards,
NVIDIA Enterprise Support

Hi NVES

Thanks for the response, and apologies from my side; I was busy with other work and could not share the repro at that time.

Here is the zip file link containing the required files and model to reproduce the error: https://drive.google.com/open?id=1eHLmJk9FSLORPUnRNngOlVvyWbcdfeiq
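
In case it helps, the parsing code in the repro follows the standard TensorRT Python OnnxParser flow, roughly like this sketch (the model path is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

# Read the serialized ONNX model and hand it to the parser;
# on failure, dump every error the parser recorded.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))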

Note:
I have updated a few libraries and the current error is:
----------------------------------------------------------------
Input filename: …/models/model.onnx
ONNX IR version: 0.0.3
Opset version: 7
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

ERROR: ModelImporter.cpp:117 In function importInputs:
[8] Assertion failed: convert_onnx_weights(initializer, &weights)
ERROR: failed to parse onnx file
failed
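
For anyone hitting the same convert_onnx_weights assertion: it fires while the parser imports the graph initializers, so a quick sanity check is to dump their data types and shapes with the ONNX Python API (the path is a placeholder). Types the parser may not be able to convert, float64 for example, show up immediately:

import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")

# Print name, data type, and shape of every initializer in the graph.
for init in model.graph.initializer:
    print(init.name, TensorProto.DataType.Name(init.data_type), list(init.dims))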

Following are updated platform details:
Ubuntu 16.04
GPU 1080Ti
Nvidia driver 410.78
CUDA version 10.0.130
CUDNN version 7.3.1
G++ version 5.4.0
TensorRT version 5.1.2.2-1+cuda10.0
Onnx version 1.2.1

Hi

I have solved the above error; it was due to a version mismatch in MXNet. The error went away after I reinstalled MXNet built against CUDA 10.

However, the ONNX parser is unable to parse my network. Following are the error logs:
----------------------------------------------------------------
Input filename: …/models/model.onnx
ONNX IR version: 0.0.3
Opset version: 7
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

While parsing node number 5 [PRelu]:
ERROR: builtin_op_importers.cpp:1415 In function importPRelu:
[8] Assertion failed: weights.shape == scalar_shape
ERROR: failed to parse onnx file
Here is the zip file link containing the required files and model to reproduce the error: https://drive.google.com/open?id=19WMc3uLI3mPZFQAVcQa5ybcmnR_ou3S7
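
To see what the parser is objecting to, here is a quick sketch with the ONNX Python API that dumps the slope shape of each PRelu node (the path is a placeholder):

import onnx

model = onnx.load("model.onnx")
initializers = {i.name: i for i in model.graph.initializer}

# The second input of PRelu is the slope tensor; the TensorRT 5 ONNX
# parser asserts that it is scalar-shaped, so a per-channel shape such
# as (64, 1, 1) trips the assertion quoted above.
for node in model.graph.node:
    if node.op_type == "PRelu":
        slope = initializers.get(node.input[1])
        if slope is not None:
            print(node.name, "slope shape:", list(slope.dims))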

I am confused about the reason for this error. Is it due to:

  1. No support for the PRelu activation in TensorRT 5.1.2 (PRelu is not mentioned among the supported ONNX ops at https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html)
  2. A mismatch of the inputs required by the available plugin for PRelu, named kPRELU (as mentioned at https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/namespacenvinfer1.html#afeb8e3947449a5adfec9a4f06324bad0)

Any pointer on this issue would be really helpful! Thank you.

Following are updated platform details:
Ubuntu 16.04
GPU 1080Ti
Nvidia driver 410.78
CUDA version 10.0.130
CUDNN version 7.3.1
G++ version 5.4.0
TensorRT version 5.1.2.2-1+cuda10.0
Onnx version 1.2.1
Mxnet version mxnet-cu100==1.4.0.post0

Hi,

I also got exactly the same error (using TensorRT 5.1.5.0 to parse an ONNX model):

While parsing node number 5 [PRelu]:
ERROR: builtin_op_importers.cpp:1415 In function importPRelu:
[8] Assertion failed: weights.shape == scalar_shape

And when I converted the model to Caffe and tried to parse it with TensorRT, I got this:

could not parse layer type PReLU

I think the error comes from the PReLU activation. I noticed this while checking “NvInferPlugin.h”:

//!
//! \brief The PReLU plugin layer performs leaky ReLU for 4D tensors. Given an input value x, the PReLU layer computes the
//! output as x if x > 0 and negative_slope * x if x <= 0.
//! \param negSlope Negative_slope value.
//! \deprecated. This plugin is superseded by createLReLUPlugin()
//!
TENSORRTAPI INvPlugin* createPReLUPlugin(float negSlope);
TENSORRTAPI INvPlugin* createPReLUPlugin(const void* data, size_t length);

This means that PReLU has been superseded by LReLU. I think the parsers (both Caffe and ONNX) still rely on the PReLU plugin, and thus raise the error.
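
To make the distinction concrete, here is a small NumPy sketch (my own illustration, not TensorRT code). LReLU uses one scalar slope, while ONNX PRelu allows a slope tensor that broadcasts per channel, which is exactly what the weights.shape == scalar_shape assertion rejects:

import numpy as np

def leaky_relu(x, alpha):
    # Single scalar slope: what the TensorRT 5 parser expects.
    return np.where(x > 0, x, alpha * x)

def prelu(x, slopes):
    # Per-channel slopes, e.g. shape (C, 1, 1) broadcasting over an
    # NCHW tensor: what ONNX PRelu permits in general.
    return np.where(x > 0, x, slopes * x)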

I’m going to check this out with the help of the NVIDIA/TensorRT repository on GitHub.
It would be great if NVIDIA solved this for us, so we can focus more on our models and applications.

Hello, did you solve the problem later?

Hi everyone

The issue has rightly been pointed out by @dluyangulei, i.e., PReLU has been implemented just like LReLU in the ONNX parser.

The problem has been resolved in TensorRT versions >= 6.

For TensorRT 5, you can build the ONNX parser with a correct implementation of PReLU in order to parse it successfully.
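
Alternatively, if all the slopes of a given PRelu happen to share one value, you can rewrite the node as LeakyRelu in the model itself instead of rebuilding the parser. A rough sketch with the ONNX Python API (file names are placeholders, and this is only valid when the slopes really are uniform):

import onnx
from onnx import helper, numpy_helper

model = onnx.load("model.onnx")
initializers = {i.name: i for i in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "PRelu" and node.input[1] in initializers:
        slopes = numpy_helper.to_array(initializers[node.input[1]]).flatten()
        if (slopes == slopes[0]).all():
            # All channels share one slope, so the node is equivalent
            # to LeakyRelu with a scalar alpha attribute.
            node.op_type = "LeakyRelu"
            node.attribute.extend([helper.make_attribute("alpha", float(slopes[0]))])
            del node.input[1]

onnx.save(model, "model_leakyrelu.onnx")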