Hi devs,
I’m getting an error parsing an ONNX model. The custom model was designed in PyTorch and successfully exported to ONNX format. Now I’m trying to use it in TensorRT.
The error comes from a specific layer:
- PyTorch layer:
torch.nn.BatchNorm1d(1000)
- ONNX exporter provides:
%61 : Tensor = onnx::Unsqueeze[axes=[2]], scope: Model/Sequential[classifier]/BatchNorm1d[1]
%62 : Tensor = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%61, %classifier.1.weight, %classifier.1.bias, %classifier.1.running_mean, %classifier.1.running_var), scope: Model/Sequential[classifier]/BatchNorm1d[1]
%63 : Float(1, 1000) = onnx::Squeeze[axes=[2]], scope: Model/Sequential[classifier]/BatchNorm1d[1]
- TensorRT error in parseFromFile function:
While parsing node number 30 [Unsqueeze]:
ERROR: builtin_op_importers.cpp:1987 In function importUnsqueeze:
[8] Assertion failed: get_shape_size(layer->getOutput(0)->getDimensions()) == get_shape_size(old_shape)
[E] Failure while parsing ONNX file
&&&& FAILED TensorRT.sample_onnx_mnist # ./sample_onnx_mnist
sample_onnx_mnist: sampleOnnxMNIST.cpp:245: int main(int, char**): Assertion `trtModelStream != nullptr' failed.
Aborted (core dumped)
I’m using PyTorch 1.1.0, ONNX 1.5.0, and TensorRT 5.1.
Any help??
I have the same problem! Did you manage to figure it out?
Hi filip_can
I didn’t find a nice solution, but here is my workaround: for training I use the standard layer, and for production I replace it with a custom layer in which the batch-normalization formula is coded explicitly.
This was the only way I found to use my model in TensorRT.
I hope this helps you.
Hi jgarciac!
Thanks for your idea.
I also ran into this error. Could you show some information about the custom layer? Is it for PyTorch or TensorRT? TensorRT does not support torch.div().
This is the custom layer for PyTorch; it is successfully exported by ONNX and parsed by TensorRT:
import torch
from torch import nn

class BatchNorm1DCell(nn.Module):
    def __init__(self, eps=1e-5):
        super(BatchNorm1DCell, self).__init__()
        # Same epsilon as the exported BatchNormalization node, to avoid
        # division by zero when the running variance is near zero
        self.eps = eps

    def forward(self, x, bn_mean, bn_var, bn_weight, bn_bias):
        # Inference-time batch normalization, written with ops TensorRT can parse
        y = ((x - bn_mean) / torch.sqrt(bn_var + self.eps)) * bn_weight + bn_bias
        return y
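As a sanity check on the formula, here is a small NumPy sketch. The statistics and affine parameters below are made-up values, not taken from any real model; it only shows that the hand-coded expression is the standard inference-time batch normalization:

```python
import numpy as np

# Hypothetical per-feature statistics and affine parameters (4 features)
x = np.array([[1.0, 2.0, 3.0, 4.0]])
mean = np.array([0.5, 1.5, 2.5, 3.5])
var = np.array([0.25, 1.0, 4.0, 9.0])
weight = np.array([1.0, 2.0, 0.5, 1.0])
bias = np.array([0.0, -1.0, 1.0, 2.0])
eps = 1e-5  # same epsilon as the ONNX BatchNormalization node

# Inference-time batch normalization, as coded in the custom layer
y = ((x - mean) / np.sqrt(var + eps)) * weight + bias
print(y)
```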
Hi jgarciac,
Thanks for your custom BatchNorm layer. Can I ask how you get (bn_mean, bn_var, bn_weight, bn_bias) to pass into the forward function?
Hi Soncaoai,
There are many ways to do that. For instance, you can use OpenCV to save and load the data using YAML files…
import cv2

# bn is the trained torch.nn.BatchNorm1d layer
mean = bn.running_mean.data.cpu().numpy()
var = bn.running_var.data.cpu().numpy()
weight = bn.weight.data.cpu().numpy()
bias = bn.bias.data.cpu().numpy()

cv_file = cv2.FileStorage("bn-data.yml", cv2.FILE_STORAGE_WRITE)
cv_file.write("mean", mean)
cv_file.write("var", var)
cv_file.write("weight", weight)
cv_file.write("bias", bias)
cv_file.release()
Then, you can load the file in your C/C++ app.
Good luck!