Description
I want to convert my PyTorch model to TensorRT. I first exported it to ONNX, and the ONNX model runs correctly with ONNX Runtime (ORT). However, when I used trtexec to convert the ONNX model to a TensorRT engine, it failed with:
[12/22/2021-17:44:03] [E] Error[9]: [graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (Exp_641: IUnaryLayer cannot be used to compute a shape tensor)
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:773: While parsing node number 664 [ConstantOfShape -> "1127"]:
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:775: input: "1126"
output: "1127"
name: "ConstantOfShape_664"
op_type: "ConstantOfShape"
attribute {
  name: "value"
  t {
    dims: 1
    data_type: 1
    raw_data: "\000\000\000\000"
  }
  type: TENSOR
}
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[12/22/2021-17:44:03] [E] [TRT] ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - ConstantOfShape_664
[graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (Exp_641: IUnaryLayer cannot be used to compute a shape tensor)
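For reference, the trtexec invocation was along these lines (the paths are placeholders, and the exact flags may have differed):

```shell
# hypothetical command line; model.onnx / model.plan are placeholder paths
trtexec --onnx=model.onnx --saveEngine=model.plan
```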
The corresponding PyTorch code is:
duration = (torch.exp(log_duration_prediction) - 1) * d_control
The TensorRT documentation says the Exp op is supported, so I don't know how to solve this issue. From the error message, it seems the problem is not Exp itself but that its output (Exp_641) is later consumed as a shape tensor (the input of ConstantOfShape_664), and TensorRT cannot use IUnaryLayer in a shape computation.
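In case it helps to reproduce, here is a minimal, hypothetical sketch of the pattern I believe triggers the error: only the `duration` line is taken from the real model, while the module name and the length computation are my guess at how the exp output ends up driving a tensor shape, so that the ONNX export wires the Exp node into the input of a ConstantOfShape.

```python
import torch

class DurationRepro(torch.nn.Module):
    # hypothetical minimal module; only the `duration` expression is from
    # the real model, the rest is an assumed sketch of the failing pattern
    def forward(self, log_duration_prediction, d_control=1.0):
        duration = (torch.exp(log_duration_prediction) - 1) * d_control
        # rounding and summing the durations to get an output length makes
        # the exported graph feed the Exp node into a shape computation
        # (ConstantOfShape), which the TensorRT parser rejects
        total_len = int(torch.clamp(torch.round(duration), min=0).sum())
        return torch.zeros(total_len)
```

In eager mode this runs fine; the failure only appears when the exported ONNX graph is parsed by TensorRT.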
Environment
TensorRT Version: 8.2.1.8
GPU Type: P40
Nvidia Driver Version: 470.82.01
CUDA Version: 11.4
CUDNN Version: 8.2.4
Operating System + Version: CentOS 7 3.10.0-1160.45.1.el7.x86_64
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag):