I have successfully converted an MXNet model to ONNX. However, while parsing it in TensorRT 5 RC, I am facing the following errors:
Initially, the error was:
Input filename: …/models/model.onnx
ONNX IR version: 0.0.3
Opset version: 8
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/ModelImporter.cpp:98 In function importInputs:
[8] Assertion failed: convert_onnx_weights(initializer, &weights)
ERROR: failed to parse onnx file
However, after reading in the docs that TensorRT 5.0 RC supports ONNX IR (Intermediate Representation) version 0.0.3 and opset version 7, I converted the ONNX model to opset 7 following this guide: https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#converting-version-of-an-onnx-model-within-default-domain-aionnx
Secondly, following the guidance in Known Issues 3 and 4 (regarding the protobuf and ONNX parser libs) in the TensorRT 5 RC Release Notes, I performed the following steps:
Downloaded the tar package of TensorRT 5 RC and copied the libs libprotobuf.a and libprotobuf-lite.a from its /lib/ directory to /usr/lib/x86_64-linux-gnu
Built and installed onnx-tensorrt from source
However, even after all this, the error still persists, though with less info now:
ERROR: failed to parse onnx file
Following are my platform details:
Ubuntu 16.04
GPU 1080Ti
Nvidia driver 384.130
CUDA version 9.0.176
CUDNN version 7.1.2
G++ version 5.4.0
TensorRT package used nv-tensorrt-repo-ubuntu1604-cuda9.0-trt5.0.0.10-rc-20180906_1-1_amd64
Any pointers regarding this problem are welcome!
Note:
I have updated a few libraries, and the current error is:
[b]----------------------------------------------------------------
Input filename: …/models/model.onnx
ONNX IR version: 0.0.3
Opset version: 7
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:
ERROR: ModelImporter.cpp:117 In function importInputs:
[8] Assertion failed: convert_onnx_weights(initializer, &weights)
ERROR: failed to parse onnx file
failed[/b]
Following are updated platform details:
Ubuntu 16.04
GPU 1080Ti
Nvidia driver 410.78
CUDA version 10.0.130
CUDNN version 7.3.1
G++ version 5.4.0
TensorRT version 5.1.2.2-1+cuda10.0
Onnx version 1.2.1
I have solved the above error; it was due to a version mismatch in MXNet. The error went away once I reinstalled MXNet built against CUDA 10.
However, the ONNX parser is still unable to parse my network. Following are the error logs:
[b]----------------------------------------------------------------
Input filename: …/models/model.onnx
ONNX IR version: 0.0.3
Opset version: 7
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:
While parsing node number 5 [PRelu]:
ERROR: builtin_op_importers.cpp:1415 In function importPRelu:
[8] Assertion failed: weights.shape == scalar_shape
ERROR: failed to parse onnx file[/b]
Here https://drive.google.com/open?id=19WMc3uLI3mPZFQAVcQa5ybcmnR_ou3S7 is the zip file link containing required files and model to reproduce the error.
I am confused about the cause of this error. Any pointers on this issue would be really helpful! Thank you
Following are updated platform details:
Ubuntu 16.04
GPU 1080Ti
Nvidia driver 410.78
CUDA version 10.0.130
CUDNN version 7.3.1
G++ version 5.4.0
TensorRT version 5.1.2.2-1+cuda10.0
Onnx version 1.2.1
Mxnet version mxnet-cu100==1.4.0.post0
I also got exactly the same error (using TensorRT 5.1.5.0 to parse an ONNX model):
While parsing node number 5 [PRelu]:
ERROR: builtin_op_importers.cpp:1415 In function importPRelu:
[8] Assertion failed: weights.shape == scalar_shape
And when I converted the model to Caffe and tried to parse it with TensorRT, I got this:
could not parse layer type PReLU
I think the error comes from the PReLU activation. I noticed this while checking “NvInferPlugin.h”:
//!
//! \brief The PReLu plugin layer performs leaky ReLU for 4D tensors. Given an input value x, the PReLU layer computes the output as x if x > 0
//! and negative_slope * x if x <= 0.
//! \param negSlope Negative_slope value.
//! \deprecated. This plugin is superseded by createLReLUPlugin()
//!
TENSORRTAPI INvPlugin* createPReLUPlugin(float negSlope);
TENSORRTAPI INvPlugin* createPReLUPlugin(const void* data, size_t length);
This means that PReLU has been superseded by LReLU. I think the parsers (both Caffe and ONNX) still use the PReLU plugin, thus raising the error.