IUnaryLayer cannot be used to compute a shape tensor

Description

I want to convert my PyTorch model to TensorRT format. First I exported it to ONNX, and the ONNX model works with ONNX Runtime (ORT). Then I tried to use trtexec to convert the ONNX model to a TensorRT engine, but it failed with:

[12/22/2021-17:44:03] [E] Error[9]: [graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (Exp_641: IUnaryLayer cannot be used to compute a shape tensor)
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:773: While parsing node number 664 [ConstantOfShape -> "1127"]:
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:775: input: "1126"
output: "1127"
name: "ConstantOfShape_664"
op_type: "ConstantOfShape"
attribute {
  name: "value"
  t {
    dims: 1
    data_type: 1
    raw_data: "\000\000\000\000"
  }
  type: TENSOR
}

[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - ConstantOfShape_664
[graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (Exp_641: IUnaryLayer cannot be used to compute a shape tensor)

The corresponding pytorch code is:
duration = (torch.exp(log_duration_prediction) - 1) * d_control

The TensorRT documentation says the Exp op is supported, so I don't know how to solve this issue.

Environment

TensorRT Version: 8.2.1.8
GPU Type: P40
Nvidia Driver Version: 470.82.01
CUDA Version: 11.4
CUDNN Version: 8.2.4
Operating System + Version: CentOS 7 3.10.0-1160.45.1.el7.x86_64
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag):

Hi,
Could you share the ONNX model and the script, if you haven't already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Load the model from the path given on the command line
# and run the ONNX structural checker on it.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

check_model does not report any error.

--verbose is already used in the log I posted above.


Hi,

Could you please share an ONNX model that reproduces the issue, so we can debug it and provide a fix?

Thank you.

I am also getting the same error, but likely due to a different model.

Here is a link to the onnx model and output from running trtexec --onnx=modified.onnx --verbose

check_model.py returns no errors.

Hello
I ran into the same problem, "IUnaryLayer cannot be used to compute a shape tensor", when I tried to get the shape value of a Ceil operation. I have tried many ways to change the implementation, but the problem is still there. Will this issue be fixed in the next TensorRT release?

Many thanks

Hi, I am blocked by the same problem.
Has anyone figured out a solution or workaround?