IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2]

Description

While parsing the attached ONNX model, the following error is raised:

[shuffleNode.cpp::nvinfer1::builder::ShuffleNode::symbolicExecute::387] Error Code 4: Internal Error (Reshape_77: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])

Environment

TensorRT Version: 8.2.2.1
GPU Type: Quadro RTX 3000
Nvidia Driver Version: 471.11
CUDA Version: 11.2
CUDNN Version: 8.1.1
Operating System + Version: Windows 10
Python Version (if applicable): 3.6.8
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files

grid_sample.onnx (8.6 KB)

Steps To Reproduce

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX parsing requires an explicit-batch network
networkFlags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(networkFlags)
parser = trt.OnnxParser(network, logger)
# Open the model in binary format and read its data into a buffer
with open('grid_sample.onnx', 'rb') as modelFD:
    modelBuffer = modelFD.read()
# Check whether the ONNX model is compatible with TRT
supportsModelInfo = parser.supports_model(modelBuffer)
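
For reference, a minimal sketch of how the check above could be extended to actually parse the buffer and surface parser errors (the builder-config value is illustrative and not part of the original script):

if not parser.parse(modelBuffer):
    # Print every error the ONNX parser recorded
    for i in range(parser.num_errors):
        print(parser.get_error(i))
else:
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # example workspace size (1 GiB)
    engine = builder.build_serialized_network(network, config)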


Hi,
Please refer to the below link for the sample guide.

Refer to the installation steps in the link in case you are missing anything.

However, the suggested approach is to use TRT NGC containers to avoid any system dependency related issues.

In order to run the Python samples, make sure the TRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh
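
For example, a sketch of pulling and entering the container (the image tag is only an example; pick the tag matching your TensorRT version on NGC):

docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:22.02-py3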

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

Thanks,
My custom model is attached.

Hi,

I think a padding-related node is causing the error; we don’t support 2D shape tensors yet. As a workaround, we can constant-fold with Polygraphy. After this, we are able to successfully generate the engine. Please try:

polygraphy surgeon sanitize --fold-constants grid_sample.onnx -o 2001833/folded.onnx
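
After folding, the engine can then be built from the sanitized model, for example with trtexec (the file names below are illustrative):

trtexec --onnx=folded.onnx --saveEngine=folded.engine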

For more details, please refer to the Polygraphy documentation.

Thank you.


Thanks for your support.
I used Polygraphy as you described above and successfully generated the TRT engine.


Hello there @spolisetty, @orong13, @NVES

I am getting the exact same error while trying to convert my Swin instance segmentation ONNX model to TensorRT using trtexec on my Jetson AGX Xavier! However, I am very unsure how to use this tool. I have TensorRT (8.2.1.8) installed natively on the Jetson with JetPack 4.6.2. But where can I find this polygraphy tool?

I found trtexec under /usr/src/tensorrt/bin, so I’m guessing it should be somewhere similar… I know this might be a very beginner question.
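
(As a side note, Polygraphy is a Python package rather than a binary shipped next to trtexec, so assuming pip is available on the Jetson it can typically be installed with the command below; the extra index URL is NVIDIA's package index, and the exact form may differ between versions.)

python3 -m pip install polygraphy --extra-index-url https://pypi.ngc.nvidia.com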

I’d like to upload my ONNX model for you to check, but it’s too big :(

I would appreciate your help a lot!

Hello,
Feel free to share your model, and I will run the same test I did on my model and update you with the results.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.