IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [1,2]

Description

While parsing the attached ONNX model, the following error is raised:

[02/06/2022-07:28:24] [E] Error[4]: [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Unsqueeze_15: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [1,2])
[02/06/2022-07:28:24] [E] [TRT] parsers/onnx/ModelImporter.cpp:780: While parsing node number 35 [Pad -> "338"]:
[02/06/2022-07:28:24] [E] [TRT] parsers/onnx/ModelImporter.cpp:781: --- Begin node ---
[02/06/2022-07:28:24] [E] [TRT] parsers/onnx/ModelImporter.cpp:782: input: "281"
input: "336"
input: "337"
output: "338"
name: "Pad_53"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING
}

[02/06/2022-07:28:24] [E] [TRT] parsers/onnx/ModelImporter.cpp:783: --- End node ---
[02/06/2022-07:28:24] [E] [TRT] parsers/onnx/ModelImporter.cpp:785: ERROR: parsers/onnx/ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Pad_53
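
For reference, the chain of nodes feeding Pad_53 can be inspected with onnx_graphsurgeon to see where the 2-D reshape of the shape tensor comes from. This is only a rough diagnostic sketch; the node name "Pad_53" and the position of the pads input are taken from the log above:

import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("fasterrcnn_d2.onnx"))
pad = next(n for n in graph.nodes if n.name == "Pad_53")

# Walk back along the first input of each producer of the "pads" tensor
# (the second input of Pad). The parser error suggests an Unsqueeze/Reshape
# in this chain turns a shape tensor into a 2-D tensor, which TensorRT rejects.
tensor = pad.inputs[1]
while tensor.inputs:  # empty for graph inputs and constants
    producer = tensor.inputs[0]
    print(producer.name, producer.op)
    if not producer.inputs:
        break
    tensor = producer.inputs[0]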

Environment

TensorRT Version: 8.2.2.1
GPU Type: T4
Nvidia Driver Version: 470.57.02
CUDA Version: 11.6
Operating System + Version: Ubuntu 20.04
Baremetal or Container (if container which image + tag): tensorrt:22.01-py3

Relevant Files

fasterrcnn_d2.onnx

Steps To Reproduce

I processed the ONNX model with fold-constants, referring to this topic:

import onnx
import onnx_graphsurgeon as gs

input_onnx_model="fasterrcnn_d2.onnx"
graph = gs.import_onnx(onnx.load(input_onnx_model))
graph.fold_constants().cleanup()
onnx.save(gs.export_onnx(graph), input_onnx_model)

Generate the TRT engine:
./trtexec --onnx=/home/fasterrcnn_d2.onnx --saveEngine=/home/fasterrcnn_d2.trt
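
For reference, a similar constant-folding pass can also be run from the command line with Polygraphy. This is only a sketch and assumes Polygraphy is installed alongside TensorRT:
polygraphy surgeon sanitize /home/fasterrcnn_d2.onnx --fold-constants -o /home/fasterrcnn_d2_folded.onnx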

Hi,

We are facing a different error when we try with the shared ONNX model.
Could you please share the complete verbose error logs by adding the --verbose option to the trtexec command?

Thank you.

Thanks. I have re-run the following command and attached the logs as trtexec_error_verbose.log (53.9 KB).
./trtexec --onnx=/home/fasterrcnn_d2.onnx --saveEngine=/home/fasterrcnn_d2.trt --verbose

Hi @18646313696,

Is this model generated using PyTorch? If yes, which version did you use to export the ONNX model?

Thank you.

The process of exporting the ONNX model was a bit convoluted. The general environment and steps are as follows.

System environment:
python: 3.7
pytorch: 1.8.1
detectron2: 0.6
ubuntu: 20.04

Steps:

  1. Export the ONNX model with the detectron2/tools/deploy/export_model.py tool.
  2. Modify the model structure with onnx_graphsurgeon.

There are a few modification details:

  1. Fix the repeat_interleave operator problem.
    Pull the code from https://github.com/pytorch/pytorch/pull/52855 and modify the repeat_interleave() function in torch/onnx/symbolic_opset9.py as follows:
  final_splits = list()
- r_splits = sym_help._repeat_interleave_split_helper(g, repeats, reps, 0)
- i_splits = sym_help._repeat_interleave_split_helper(g, input, reps, dim)
+ r_splits = [sym_help._repeat_interleave_split_helper(g, repeats, reps, 0)]
+ i_splits = [sym_help._repeat_interleave_split_helper(g, input, reps, dim)]
  input_sizes[dim], input_sizes_temp[dim] = -1, 1
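
    After patching, the export path can be smoke-tested by exporting a tiny module that uses repeat_interleave. This is only a rough sanity check (whether it actually exercises the patched branch depends on how repeats is passed), assuming the patched PyTorch build is the one being imported:

import torch

class RepeatInterleaveTest(torch.nn.Module):
    def forward(self, x):
        # repeat_interleave with tensor-valued repeats along dim 0
        return torch.repeat_interleave(x, torch.tensor([1, 2, 3]), dim=0)

torch.onnx.export(RepeatInterleaveTest(), (torch.randn(3, 4),),
                  "repeat_interleave_test.onnx", opset_version=12)
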
  2. Export the model with opset_version=12. Modify detectron2/tools/deploy/export_model.py:
elif args.format == "onnx":
    with PathManager.open(os.path.join(args.output, "model.onnx"), "wb") as f:
        input_names = ["input0"]
        output_names = ["output0"]
        dynamic_axes = {}  # empty: every dimension is exported as static
        torch.onnx.export(traceable_model,
                          (image,),
                          f,
                          input_names=input_names,
                          output_names=output_names,
                          dynamic_axes=dynamic_axes,
                          opset_version=12,
                          verbose=True)

Run command:
python export_model.py --format onnx --export-method tracing --config-file ./configs/COCO-Detection/faster_rcnn_R_50_C4_1x.yaml --output ./export/output --opts MODEL.WEIGHTS ./model_final_721ade.pkl
This successfully exported model.onnx.
  3. Fix the following error:

[01/29/2022-20:14:38] [E] [TRT] ModelImporter.cpp:779: ERROR: input0:231 In function importInput:

Modify the model structure with onnx_graphsurgeon; the code is as follows:

import numpy as np
import onnx
import onnx_graphsurgeon as gs

input_onnx_model = "./model.onnx"
output_onnx_model = "fasterrcnn_d2.onnx"
graph = gs.import_onnx(onnx.load(input_onnx_model))

# Force the graph input dtype to float32 (the importInput error points at input0).
for inp in graph.inputs:
    inp.dtype = np.float32

# Find the Sub_3 node and rewire it to read the graph input directly,
# keeping only its other operand (the tensor named "278").
sub_node = None
for node in graph.nodes:
    if node.name == "Sub_3":
        sub_node = node
        break
sub_node.inputs = graph.inputs + [i for i in sub_node.inputs if i.name == "278"]

graph.fold_constants().cleanup()
onnx.save(gs.export_onnx(graph), output_onnx_model)

This finally produces fasterrcnn_d2.onnx.
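
Before handing the final model to trtexec, it can be sanity-checked with onnx.checker; a minimal sketch:

import onnx

# Run the ONNX checker on the re-exported model; it flags structural problems
# such as missing input element types before TensorRT parsing is attempted.
model = onnx.load("fasterrcnn_d2.onnx")
onnx.checker.check_model(model)
print("fasterrcnn_d2.onnx passed the ONNX checker")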

Hi,

There is a known issue that causes this error while parsing the ONNX model.
Could you please upgrade PyTorch to the latest version (1.10), which may produce a better-optimized ONNX graph, then re-export the model and try building the TensorRT engine again?

Thank you.

Thanks, I will try upgrading PyTorch to the latest version and report back with the results.

Hello, I still hit the same problem: "[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_75: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])". Since this is the known issue you mentioned, is there any workaround to solve this problem? Many thanks.

Were you able to solve this problem?