Description
I converted a PyTorch model to ONNX, then applied: polygraphy surgeon sanitize --fold-constants model.onnx -o folding_model.onnx
During inference with TensorRT, parsing fails:
[07/07/2022-16:02:26] [TRT] [V] Pad_183 [Pad] inputs: [input → (1, 3, 720, 1280)[FLOAT]], [onnx::Pad_440 → (8)[INT32]], [onnx::Pad_441 → ()[FLOAT]],
[07/07/2022-16:02:26] [TRT] [V] Registering layer: Pad_183 for ONNX node: Pad_183
[07/07/2022-16:02:26] [TRT] [E] [shuffleNode.cpp::symbolicExecute::392] Error Code 4: Internal Error (Reshape_172: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
The model :
output = F.pad(
    input,
    pad=(int(leftPad), int(rightPad), int(topPad), int(bottomPad)),
    mode="constant",
    value=0.0,
)
where:
input is a 4D torch.Tensor of shape (1, C, H, W),
leftPad, rightPad, topPad, bottomPad are torch.Tensors with one element each; these values are not constant.
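To make the pad-tuple convention concrete: for a 4D input, a 4-tuple passed to F.pad pads only the last two dimensions (W by left/right, H by top/bottom). A pure-Python sketch of the resulting output shape (padded_shape is a hypothetical helper, not part of the original model):

```python
def padded_shape(shape, left, right, top, bottom):
    # shape is (N, C, H, W); F.pad's 4-tuple (left, right, top, bottom)
    # pads the last dimension (W) by left/right and H by top/bottom
    n, c, h, w = shape
    return (n, c, h + top + bottom, w + left + right)

print(padded_shape((1, 3, 720, 1280), 2, 2, 1, 1))  # → (1, 3, 722, 1284)
```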
Environment
TensorRT Version : 8.4.1.5
GPU Type :
Nvidia Driver Version :
CUDA Version : 11.3
CUDNN Version : 8.2
Operating System + Version :
Python Version (if applicable) : 3.8
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) : 1.12.0
Baremetal or Container (if container which image + tag) :
NVES, July 7, 2022, 3:37pm (#2)
Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
Hi,
The following similar post may help you.
Hi,
I think the padding-related node is causing the error; we don't support 2D shape tensors yet. As a workaround, you can try constant folding with Polygraphy; after this, we were able to successfully generate the engine. Please try:
polygraphy surgeon sanitize --fold-constants grid_sample.onnx -o 2001833/folded.onnx
Thank you.
Thank you.
test.onnx (4.2 KB)
test_polygraphy.onnx (2.5 KB)
You can find the ONNX models attached:
test.onnx: before polygraphy surgeon sanitize --fold-constants,
test_polygraphy.onnx: after.
I validated the model with onnx.checker.check_model(model, full_check=True).
I tried to simplify the code:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()

    def forward(self, input):
        shape = input.size()
        X = torch.tensor(1.)
        zero = torch.tensor(0)
        pad = torch.stack((X - shape[2] + 1, zero))
        pad = torch.max(pad)
        out = F.pad(
            input,
            (int(pad), int(pad), int(pad), int(pad)),
            "constant",
            0.0,
        )
        return out
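The stack/max pair above just clamps the required padding at zero, i.e. pad = max(X - H + 1, 0). A plain-Python sketch of the value being traced (compute_pad is a hypothetical name):

```python
def compute_pad(x, h):
    # equivalent of torch.max(torch.stack((x - h + 1, zero))):
    # pad only when the input height h is smaller than x
    return max(x - h + 1, 0)

print(compute_pad(1, 720))  # → 0  (H is already large enough, no padding)
print(compute_pad(5, 3))    # → 3  (padding needed to grow H toward x)
```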
Inference also fails with:
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()

    def forward(self, input):
        shape = input.size()
        X = torch.tensor(1.)
        pad = max(int(X - shape[2] + 1), 0.)
        out = F.pad(
            input,
            (int(pad), int(pad), int(pad), int(pad)),
            "constant",
            0.0,
        )
        return out
Hi, I have already done this operation and I still get the same error.
Thanks for your help !
Hi,
We could reproduce the error. N-D shape tensors are not supported yet.
This is a known issue; we are working on a fix, which will land in a future release.
Thank you.
Hi, thanks for your answer.
I don't understand why the model works without the max functions. Is the Pad operation unsupported because of the preceding function?
Hi,
It looks like you're using an old version of TensorRT; we recommend trying the latest TensorRT version and letting us know if you still face this issue.
Thank you.
I am using version 8.4.1.5.
Hi,
Yes, as mentioned earlier, this is an N-D shape tensor issue, which is not supported in the current version of TensorRT.
Thank you.