Issues with torch.nn.ReflectionPad2d(padding) conversion to TRT engine

Description

I have a PyTorch model that uses torch.nn.ReflectionPad2d(padding) in one of its layers. I converted the model to ONNX and that works fine, but when I try to convert the ONNX model to a TensorRT engine it gets stuck because of this padding. Can anyone tell me how to solve this issue?
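For context, reflection padding mirrors the values next to the border (without repeating the edge element itself) instead of filling with a constant. A minimal pure-Python sketch of what ReflectionPad2d computes along a single row (illustrative only, not the PyTorch implementation):

```python
def reflect_pad_1d(row, pad):
    """Reflection-pad a 1-D list by `pad` elements on each side.

    Mirrors values around the edge element without repeating it,
    matching the semantics of torch.nn.ReflectionPad2d along one axis.
    """
    assert pad < len(row), "reflection pad must be smaller than the dimension"
    left = [row[i] for i in range(pad, 0, -1)]   # row[pad], ..., row[1]
    right = [row[-2 - i] for i in range(pad)]    # row[-2], row[-3], ...
    return left + row + right

# Example: padding [1, 2, 3, 4] by 2 on each side.
print(reflect_pad_1d([1, 2, 3, 4], 2))  # [3, 2, 1, 2, 3, 4, 3, 2]
```

Note the constraint that the pad amount must be smaller than the padded dimension, which is also required by the ONNX Pad operator in reflect mode.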

Command from terminal: trtexec --onnx=inference_models/rrdb.onnx --saveEngine=inference_models/rrdb.trt --explicitBatch

Error:
[09/24/2021-01:50:25] [W] [TRT] onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/24/2021-01:50:25] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:720: While parsing node number 14 [Pad -> "796"]:
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:722: input: "773"
input: "795"
output: "796"
name: "Pad_14"
op_type: "Pad"
attribute {
name: "mode"
s: "reflect"
type: STRING
}

[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:725: ERROR: builtin_op_importers.cpp:2984 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights() && "The input pads is required to be an initializer."
[09/24/2021-01:50:25] [E] Failed to parse onnx file
[09/24/2021-01:50:25] [I] Finish parsing network model
[09/24/2021-01:50:25] [E] Parsing model failed
[09/24/2021-01:50:25] [E] Engine creation failed
[09/24/2021-01:50:25] [E] Engine set up failed

Environment

TensorRT Version: 8.0.1.6
GPU Type: GeForce GTX 860M
Nvidia Driver Version: 470.63.01
CUDA Version: 11.4
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0+cu111
Baremetal or Container (if container which image + tag):

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the below snippet:

check_model.py

import onnx

# Load the ONNX model and validate its structure;
# raises an exception if the model is malformed.
filename = "yourONNXmodel"  # replace with the path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing issues, please share the trtexec --verbose log for further debugging.
Thanks!

Hello,

The link https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec returns a 404 error. check_model.py works fine, and I get the desired output from the ONNX model.


After running the --verbose command, this is what I get:

[W] [TRT] onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[E] [TRT] ModelImporter.cpp:720: While parsing node number 14 [Pad -> "796"]:
[E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[E] [TRT] ModelImporter.cpp:722: input: "773"
input: "795"
output: "796"
name: "Pad_14"
op_type: "Pad"
attribute {
name: "mode"
s: "reflect"
type: STRING
}

[E] [TRT] ModelImporter.cpp:723: --- End node ---
[E] [TRT] ModelImporter.cpp:725: ERROR: builtin_op_importers.cpp:2984 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights() && "The input pads is required to be an initializer."
[E] Failed to parse onnx file
[E] Parsing model failed
[E] Engine creation failed
[E] Engine set up failed

Regards,
Chashi

Here is a link to sample code where trtexec is not working for reflection padding:
https://drive.google.com/drive/folders/10l2sQplVTvBfBMte7Fq7JxKHIBINaG2r?usp=sharing

Just run the cnn_model_loader.py file and you will see the error.

Hi,

Could you please provide access to the model so that we can try it from our end?

Thank you.

Here’s the link:

https://drive.google.com/drive/folders/10l2sQplVTvBfBMte7Fq7JxKHIBINaG2r?usp=sharing

Hi,

Sorry for not being clear previously. It looks like you're using reflection padding, which is not supported; TensorRT currently supports only constant zero padding.

This will be fixed in the next release. Please find more details here,

Thank you.

When is the next release date?

Hi,

TensorRT 8.2 EA is available now.
Please check out https://developer.nvidia.com/nvidia-tensorrt-8x-download to download the latest version.

Upgrading TensorRT

Thank you.

Thank you so much! Is it available in the TensorRT Docker images yet? Right now I am using NVIDIA release 21.08, where TensorRT is 8.0.1.

Hi @ci20l,

It is not yet available as a Docker image; it will be released soon. In the meantime, you can try upgrading inside the container.

Thank you.

Hello, has the Docker image been released yet? I tried upgrading inside the container, but I am having some issues with cuDNN. I would be really grateful if you could provide the Docker image link when it is released. Thank you.

Hi,

As mentioned previously, the latest TensorRT NGC container is not available yet.
Please check the TensorRT support matrix and make sure the dependencies are satisfied correctly:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html

Thank you.

I created a Docker image following the README file from this link:

But when I run the trtexec command inside the container, it says:
-bash: trtexec: command not found

What can be the issue?

Hi,

Please check the following; you may need to export the path.
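For reference, in the NGC TensorRT containers the trtexec binary typically lives under /usr/src/tensorrt/bin rather than on the default PATH. Assuming that layout, a quick fix for the current shell session looks like:

```shell
# trtexec is usually shipped under /usr/src/tensorrt/bin in NGC images;
# add that directory to PATH for the current shell session.
export PATH=/usr/src/tensorrt/bin:$PATH
# then verify with: trtexec --help
```

If the directory does not exist at all, the TensorRT samples (including trtexec) were likely not installed into the image, which matches the follow-up question below.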

Thank you.

My /usr/src doesn't contain any tensorrt folder. How should I build the image so that it is there?

Hi,

Please make sure you're following the steps as per the doc shared previously.

You can remove the existing installation and try again.

Thank you.