Issues with torch.nn.ReflectionPad2d(padding) conversion to TRT engine

Description

I have a PyTorch model that uses torch.nn.ReflectionPad2d(padding) in one of its layers. I converted the model to ONNX and that works fine, but when I try to convert the ONNX model to a TensorRT engine it gets stuck because of this padding. Can anyone tell me how to solve this issue?

Command from terminal: trtexec --onnx=inference_models/rrdb.onnx --saveEngine=inference_models/rrdb.trt --explicitBatch

Error:
[09/24/2021-01:50:25] [W] [TRT] onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/24/2021-01:50:25] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:720: While parsing node number 14 [Pad → "796"]:
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:722: input: "773"
input: "795"
output: "796"
name: "Pad_14"
op_type: "Pad"
attribute {
name: "mode"
s: "reflect"
type: STRING
}

[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[09/24/2021-01:50:25] [E] [TRT] ModelImporter.cpp:725: ERROR: builtin_op_importers.cpp:2984 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights() && "The input pads is required to be an initializer."
[09/24/2021-01:50:25] [E] Failed to parse onnx file
[09/24/2021-01:50:25] [I] Finish parsing network model
[09/24/2021-01:50:25] [E] Parsing model failed
[09/24/2021-01:50:25] [E] Engine creation failed
[09/24/2021-01:50:25] [E] Engine set up failed

Environment

TensorRT Version: 8.0.1.6
GPU Type: GeForce GTX 860M
Nvidia Driver Version: 470.63.01
CUDA Version: 11.4
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0+cu111
Baremetal or Container (if container which image + tag):

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validating your model with the below snippet:

check_model.py

import onnx

model = onnx.load("your_model.onnx")  # replace with the path to your ONNX model
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!

Hello,

This link: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec is giving a 404 error. check_model.py also runs fine, and I got the desired output from the ONNX model.




After running the verbose command this is what I get:

[W] [TRT] onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[E] [TRT] ModelImporter.cpp:720: While parsing node number 14 [Pad → "796"]:
[E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[E] [TRT] ModelImporter.cpp:722: input: "773"
input: "795"
output: "796"
name: "Pad_14"
op_type: "Pad"
attribute {
name: "mode"
s: "reflect"
type: STRING
}

[E] [TRT] ModelImporter.cpp:723: --- End node ---
[E] [TRT] ModelImporter.cpp:725: ERROR: builtin_op_importers.cpp:2984 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights() && "The input pads is required to be an initializer."
[E] Failed to parse onnx file
[E] Parsing model failed
[E] Engine creation failed
[E] Engine set up failed

Regards,
Chashi

Here is a link to sample code where trtexec is not working for reflection padding:
https://drive.google.com/drive/folders/10l2sQplVTvBfBMte7Fq7JxKHIBINaG2r?usp=sharing

Just run the cnn_model_loader.py file and you will see the error.

Hi,

Could you please provide access to the model so that we can try it from our end?

Thank you.

Here’s the link:

https://drive.google.com/drive/folders/10l2sQplVTvBfBMte7Fq7JxKHIBINaG2r?usp=sharing

Hi,

Sorry for not being clear previously. It looks like you're using reflection padding, which is not supported; TensorRT currently supports only constant zero padding.

This will be fixed in the next release. Please check here for more details.

Thank you.
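
(Until then, a common workaround is to replace nn.ReflectionPad2d with zero padding before export and accept the slightly different border behavior. A NumPy sketch of how the two modes differ at the border, illustrative only:)

```python
import numpy as np

row = np.array([1.0, 2.0, 3.0])

# Reflection padding mirrors the values inside the border:
reflected = np.pad(row, 1, mode="reflect")   # → [2. 1. 2. 3. 2.]

# Constant zero padding, the only mode TensorRT 8.0 supports:
zeroed = np.pad(row, 1, mode="constant")     # → [0. 1. 2. 3. 0.]
```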

When is the next release date?

Hi,

TensorRT 8.2 EA is available now.
Please check out https://developer.nvidia.com/nvidia-tensorrt-8x-download to download the latest version.

Upgrading TensorRT

Thank you.

Thank you so much! Is it available in the TensorRT Docker images yet? Right now I am using NVIDIA release 21.08, where TensorRT is 8.0.1.

Hi @ci20l,

It is not yet available as a Docker image; it will be released soon. Meanwhile, you can also try upgrading inside the container.

Thank you.

Hello, is the Docker image released yet? I tried upgrading inside the container, but I am having some issues with cuDNN. I would be really grateful if you could provide the Docker image link when it is released. Thank you.

Hi,

As mentioned previously, an NGC container with the latest TensorRT version is not available yet.
Please check the TensorRT support matrix and make sure the dependencies are satisfied correctly.

Thank you.

I created a Docker image following the README file from this link:

But when I run the trtexec command inside the container, it says:
-bash: trtexec: command not found

What can be the issue?

Hi,

Please check the following; you may need to export the path.
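
(For a typical package install, trtexec ships with TensorRT but is not on PATH; it usually lives under /usr/src/tensorrt/bin. That location is an assumption and varies by install method and image:)

```shell
# Add the TensorRT binaries directory to PATH (this path is the usual
# Debian-package location; adjust if your install places trtexec elsewhere).
export PATH=/usr/src/tensorrt/bin:$PATH
# then verify with: trtexec --help
```

To make this persistent inside the container, append the export line to ~/.bashrc.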

Thank you.

My /usr/src doesn't contain any tensorrt folder. How should I build the image so that it has one? (Screenshot: Screen Shot 2021-10-25 at 10.07.20 PM)

Hi,

Please make sure you're following the steps as per the doc shared previously.

You can remove existing installation and try again.

Thank you.

I was able to run TensorRT 8.2.0.6, but the issue with reflection padding is still there. Or I could be wrong. Can you please verify?

Hi,

Could you please make sure the model is valid? Based on the comments in the GitHub issue, it looks like the issue has been resolved in newer versions.

If you still face this issue, please share the trtexec --verbose log and the ONNX model with us for better debugging.

Thank you.

This is the verbose output:

[11/03/2021-13:50:58] [V] [TRT] Parsing node: Conv_0 [Conv]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: input
[11/03/2021-13:50:58] [V] [TRT] Searching for input: conv_first.weight
[11/03/2021-13:50:58] [V] [TRT] Searching for input: conv_first.bias
[11/03/2021-13:50:58] [V] [TRT] Conv_0 [Conv] inputs: [input → (1, 1, 200, 200)[FLOAT]], [conv_first.weight → (64, 1, 3, 3)[FLOAT]], [conv_first.bias → (64)[FLOAT]],
[11/03/2021-13:50:58] [V] [TRT] Convolution input dimensions: (1, 1, 200, 200)
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Conv_0 for ONNX node: Conv_0
[11/03/2021-13:50:58] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[11/03/2021-13:50:58] [V] [TRT] Convolution output dimensions: (1, 64, 200, 200)
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 773 for ONNX tensor: 773
[11/03/2021-13:50:58] [V] [TRT] Conv_0 [Conv] outputs: [773 → (1, 64, 200, 200)[FLOAT]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: ConstantOfShape_1 [ConstantOfShape]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 10313
[11/03/2021-13:50:58] [V] [TRT] ConstantOfShape_1 [ConstantOfShape] inputs: [10313 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: 10313 for ONNX node: 10313
[11/03/2021-13:50:58] [V] [TRT] Registering layer: ConstantOfShape_1 for ONNX node: ConstantOfShape_1
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 783 for ONNX tensor: 783
[11/03/2021-13:50:58] [V] [TRT] ConstantOfShape_1 [ConstantOfShape] outputs: [783 → (4)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Concat_2 [Concat]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 10314
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 783
[11/03/2021-13:50:58] [V] [TRT] Concat_2 [Concat] inputs: [10314 → (4)[INT32]], [783 → (4)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: 10314 for ONNX node: 10314
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Concat_2 for ONNX node: Concat_2
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 784 for ONNX tensor: 784
[11/03/2021-13:50:58] [V] [TRT] Concat_2 [Concat] outputs: [784 → (8)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Constant_3 [Constant]
[11/03/2021-13:50:58] [V] [TRT] Constant_3 [Constant] inputs:
[11/03/2021-13:50:58] [V] [TRT] Constant_3 [Constant] outputs: [785 → (2)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Reshape_4 [Reshape]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 784
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 785
[11/03/2021-13:50:58] [V] [TRT] Reshape_4 [Reshape] inputs: [784 → (8)[INT32]], [785 → (2)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Reshape_4 for ONNX node: Reshape_4
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 786 for ONNX tensor: 786
[11/03/2021-13:50:58] [V] [TRT] Reshape_4 [Reshape] outputs: [786 → (4, 2)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Constant_5 [Constant]
[11/03/2021-13:50:58] [V] [TRT] Constant_5 [Constant] inputs:
[11/03/2021-13:50:58] [V] [TRT] Constant_5 [Constant] outputs: [787 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Constant_6 [Constant]
[11/03/2021-13:50:58] [V] [TRT] Constant_6 [Constant] inputs:
[11/03/2021-13:50:58] [V] [TRT] Constant_6 [Constant] outputs: [788 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Constant_7 [Constant]
[11/03/2021-13:50:58] [V] [TRT] Constant_7 [Constant] inputs:
[11/03/2021-13:50:58] [V] [TRT] Weight at index 0: -9223372036854775807 is out of range. Clamping to: -2147483648
[11/03/2021-13:50:58] [W] [TRT] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[11/03/2021-13:50:58] [V] [TRT] Constant_7 [Constant] outputs: [789 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Constant_8 [Constant]
[11/03/2021-13:50:58] [V] [TRT] Constant_8 [Constant] inputs:
[11/03/2021-13:50:58] [V] [TRT] Constant_8 [Constant] outputs: [790 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Slice_9 [Slice]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 786
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 788
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 789
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 787
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 790
[11/03/2021-13:50:58] [V] [TRT] Slice_9 [Slice] inputs: [786 → (4, 2)[INT32]], [788 → (1)[INT32]], [789 → (1)[INT32]], [787 → (1)[INT32]], [790 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Slice_9 for ONNX node: Slice_9
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 791 for ONNX tensor: 791
[11/03/2021-13:50:58] [V] [TRT] Slice_9 [Slice] outputs: [791 → (4, 2)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Transpose_10 [Transpose]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 791
[11/03/2021-13:50:58] [V] [TRT] Transpose_10 [Transpose] inputs: [791 → (4, 2)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Transpose_10 for ONNX node: Transpose_10
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 792 for ONNX tensor: 792
[11/03/2021-13:50:58] [V] [TRT] Transpose_10 [Transpose] outputs: [792 → (2, 4)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Constant_11 [Constant]
[11/03/2021-13:50:58] [V] [TRT] Constant_11 [Constant] inputs:
[11/03/2021-13:50:58] [V] [TRT] Constant_11 [Constant] outputs: [793 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Reshape_12 [Reshape]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 792
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 793
[11/03/2021-13:50:58] [V] [TRT] Reshape_12 [Reshape] inputs: [792 → (2, 4)[INT32]], [793 → (1)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Reshape_12 for ONNX node: Reshape_12
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 794 for ONNX tensor: 794
[11/03/2021-13:50:58] [V] [TRT] Reshape_12 [Reshape] outputs: [794 → (8)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Cast_13 [Cast]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 794
[11/03/2021-13:50:58] [V] [TRT] Cast_13 [Cast] inputs: [794 → (8)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Casting to type: int32
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Cast_13 for ONNX node: Cast_13
[11/03/2021-13:50:58] [V] [TRT] Registering tensor: 795 for ONNX tensor: 795
[11/03/2021-13:50:58] [V] [TRT] Cast_13 [Cast] outputs: [795 → (8)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Parsing node: Pad_14 [Pad]
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 773
[11/03/2021-13:50:58] [V] [TRT] Searching for input: 795
[11/03/2021-13:50:58] [V] [TRT] Pad_14 [Pad] inputs: [773 → (1, 64, 200, 200)[FLOAT]], [795 → (8)[INT32]],
[11/03/2021-13:50:58] [V] [TRT] Registering layer: Pad_14 for ONNX node: Pad_14
[11/03/2021-13:50:58] [E] Error[4]: [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_4: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
[11/03/2021-13:50:58] [E] [TRT] ModelImporter.cpp:769: While parsing node number 14 [Pad → "796"]:
[11/03/2021-13:50:58] [E] [TRT] ModelImporter.cpp:770: --- Begin node ---
[11/03/2021-13:50:58] [E] [TRT] ModelImporter.cpp:771: input: "773"
input: "795"
output: "796"
name: "Pad_14"
op_type: "Pad"
attribute {
name: "mode"
s: "reflect"
type: STRING
}

[11/03/2021-13:50:58] [E] [TRT] ModelImporter.cpp:772: --- End node ---
[11/03/2021-13:50:58] [E] [TRT] ModelImporter.cpp:775: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Pad_14
[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_4: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
[11/03/2021-13:50:58] [E] Failed to parse onnx file
[11/03/2021-13:50:58] [I] Finish parsing network model
[11/03/2021-13:50:58] [E] Parsing model failed
[11/03/2021-13:50:58] [E] Failed to create engine from model.
[11/03/2021-13:50:58] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8200] # /usr/src/tensorrt/bin/trtexec --onnx=rrdb_fp32_200.onnx --saveEngine=rrdb_fp32_200.trt --explicitBatch --verbose