IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2]


I am trying to convert a PyTorch model to TensorRT (PyTorch -> ONNX -> TensorRT).
I converted the model to ONNX successfully, but when I try to convert the ONNX model to TensorRT using trtexec, I get:
[6] Invalid Node - Pad_14
[shuffleNode.cpp::symbolicExecute::387] Error code 4: Internal Error (Reshape_3: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2]

I inspected the model with Netron.

It seems the PyTorch function F.pad produces the incompatible node.
The first step in forward() is:

    h, w = x.shape[-2:]
    extra_h = (math.ceil(w / self.stride[1]) - 1) * self.stride[1] - w + self.kernel_size[1]
    extra_v = (math.ceil(h / self.stride[0]) - 1) * self.stride[0] - h + self.kernel_size[0]
    left = extra_h // 2
    right = extra_h - left
    top = extra_v // 2
    bottom = extra_v - top
    x = F.pad(x, [left, right, top, bottom])
    x = self.conv(x)
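If the input size is fixed at export time, one common workaround is to force the shape arithmetic into plain Python ints, so that torch.onnx.export traces the pads as constants instead of a dynamic Pad/Slice subgraph. A minimal sketch of the padding arithmetic (`same_padding` is a hypothetical helper, not part of the original model):

```python
import math

def same_padding(h, w, kernel_size, stride):
    """Compute SAME-style pads (left, right, top, bottom) as Python ints.

    Hypothetical helper for illustration. Calling int() on x.shape[-2:]
    before this arithmetic makes torch.onnx.export record the pads as
    constants (only valid when the input size is fixed at export time).
    """
    extra_h = (math.ceil(w / stride[1]) - 1) * stride[1] - w + kernel_size[1]
    extra_v = (math.ceil(h / stride[0]) - 1) * stride[0] - h + kernel_size[0]
    left = extra_h // 2
    right = extra_h - left
    top = extra_v // 2
    bottom = extra_v - top
    return left, right, top, bottom

print(same_padding(5, 5, (3, 3), (2, 2)))  # (1, 1, 1, 1)
```

The tuple can then be passed straight to F.pad(x, [left, right, top, bottom]); because the values are Python ints, no shape-tensor Reshape is emitted.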

Is the problem caused by the F.pad() function, or by something else?


TensorRT Version:
GPU Type: RTX3060Ti
Nvidia Driver Version: 470.86
CUDA Version: 11.3
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.7
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.10
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Could you please share the ONNX model and the script if you haven't already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

import onnx

filename = "model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
  2. Try running your model with the trtexec command, for example:

trtexec --onnx=model.onnx --verbose

If you are still facing the issue, please share the trtexec --verbose log for further debugging.


The following may help you.

efficientdet.onnx (27.4 MB)
This is my onnx model file.
I can run it successfully with onnxruntime (Python).


As mentioned in the above post, please try the Polygraphy tool:
polygraphy surgeon sanitize --fold-constants efficientdet.onnx -o folded.onnx

Thank you.

I tried this command but got the same problem.
I inspected every step and found that the problem is caused by the torch.onnx.export opset_version.
With opset_version=11 I get:
Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.

This happens when torch.nn.functional.pad is applied in forward(); polygraphy cannot fix it.

Thanks for the confirmation. We will look into this issue further.

It looks like you marked your previous reply as the solution. Do you still need help with this issue?
Thank you.

I don't need help with this issue anymore. I think I found the root cause; the remaining work is waiting on another team, such as the PyTorch or Polygraphy developers, to solve it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.