Single BatchNorm layer ONNX model gets wrong result in TensorRT

I trained a UNet model in PyTorch for segmentation and exported the corresponding ONNX model using torch.onnx. Then I ran inference with this ONNX model using the TensorRT library (C++). The program ran without errors, but the result was not correct. I heard from other people that TensorRT does not support the BatchNorm layer (my model has this layer), so I ran a trial: I exported a single-BatchNorm-layer model from PyTorch (torch.nn.BatchNorm) along with its corresponding ONNX model. With the same input, the PyTorch model (.pth) in Python and its ONNX version (.onnx) in C++ (with TensorRT) gave different results, so I suspect the reason is that the BatchNorm layer is not supported by TensorRT.
I want to ask: will the BatchNorm layer be supported in a future version of TensorRT, and how can I work around this at the current stage, e.g., by substituting another layer or by writing an IPluginV2? Also, can you give me a list of the layers supported by TensorRT? Thanks a lot!
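
For reference, here is a minimal sketch of the single-BatchNorm experiment described above (the channel count, input shape, and file names are my assumptions). One detail worth checking: the model must be in eval() mode before export, because in training mode BatchNorm normalizes with batch statistics instead of the stored running mean/variance, which by itself makes PyTorch and the exported ONNX model disagree.

```python
# Minimal sketch of the single-BN export (shapes and names are assumed)
import torch

bn = torch.nn.BatchNorm2d(3)
bn.eval()  # important: eval() makes BN use running_mean/running_var

x = torch.randn(1, 3, 8, 8)
torch.onnx.export(bn, x, "single_bn.onnx",
                  input_names=["input"], output_names=["output"])

with torch.no_grad():
    reference = bn(x)  # compare this tensor against the TensorRT output
```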

I met nearly the same problem. I tried to convert a PyTorch model to TensorRT but could not find the BatchNorm operation in the official documentation. It looks like TensorRT does not support BatchNorm at the moment. I am not quite sure whether it can be implemented with a scale operation.
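
For what it's worth, at inference time BatchNorm reduces to a per-channel affine transform, y = gamma * (x - mean) / sqrt(var + eps) + beta, so in principle it does map onto a scale operation. A small numpy sketch (function and parameter names are mine) to make that concrete:

```python
# Sketch: inference-time BatchNorm folded into per-channel scale/shift
import numpy as np

def batchnorm_as_scale(x, gamma, beta, mean, var, eps=1e-5):
    # x has shape (N, C, H, W); gamma/beta/mean/var have shape (C,)
    scale = gamma / np.sqrt(var + eps)   # per-channel multiplier
    shift = beta - mean * scale          # per-channel additive term
    return x * scale[None, :, None, None] + shift[None, :, None, None]
```

These scale and shift vectors are exactly the per-channel weights a TensorRT scale layer consumes.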

If you want to know which layers are supported, you can check the documentation: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/index.html. But I think there are not many; even basic things like leaky ReLU, BatchNorm, and group convolution are not included.

If someone knows how to do BatchNorm in TensorRT, please let me know; that would be a great help.

Hello,
How are you converting your model to TensorRT? Are you using TensorRT 5?
With TensorRT 4 I use https://github.com/onnx/onnx-tensorrt, which converts the batch norm layer to a scale layer. Unfortunately, this also does not seem to work correctly, as I get significantly different values.
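
In case it helps with debugging, here is a hedged sketch of building that BN-to-scale mapping by hand with the TensorRT Python API (the helper name and parameters are my own; the eps must match the value stored in the ONNX BatchNormalization node, or the outputs will drift):

```python
# Sketch (assumed helper): add a BatchNorm layer as a TensorRT scale layer
import numpy as np
import tensorrt as trt

def add_bn_as_scale(network, input_tensor, gamma, beta, mean, var, eps=1e-5):
    scale = (gamma / np.sqrt(var + eps)).astype(np.float32)
    shift = (beta - mean * scale).astype(np.float32)
    power = np.ones_like(scale)  # identity exponent
    # CHANNEL mode applies one (shift, scale, power) triple per channel
    return network.add_scale(input_tensor, trt.ScaleMode.CHANNEL,
                             trt.Weights(shift), trt.Weights(scale),
                             trt.Weights(power))
```

If the values still disagree after something like this, comparing the eps and the running statistics baked into the ONNX file against what the converter actually emits would be my next check.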
