IConvolutionLayer cannot be used to compute a shape tensor

Description

[01/05/2023-11:31:51] [V] [TRT] Parsing node: Range_492 [Range]
[01/05/2023-11:31:51] [V] [TRT] Searching for input: 1023
[01/05/2023-11:31:51] [V] [TRT] Searching for input: 1022
[01/05/2023-11:31:51] [V] [TRT] Searching for input: 1024
[01/05/2023-11:31:51] [V] [TRT] Range_492 [Range] inputs: [1023 -> ()[INT32]], [1022 -> ()[INT32]], [1024 -> ()[INT32]],
[01/05/2023-11:31:51] [V] [TRT] Registering layer: Range_492 for ONNX node: Range_492
[01/05/2023-11:31:51] [E] Error[9]: [graph.cpp::computeInputExecutionUses::553] Error Code 9: Internal Error (Conv_344: IConvolutionLayer cannot be used to compute a shape tensor)
[01/05/2023-11:31:51] [E] [TRT] parsers/onnx/ModelImporter.cpp:773: While parsing node number 391 [Range -> "1025"]:
[01/05/2023-11:31:51] [E] [TRT] parsers/onnx/ModelImporter.cpp:774: --- Begin node ---
[01/05/2023-11:31:51] [E] [TRT] parsers/onnx/ModelImporter.cpp:775: input: "1023"
input: "1022"
input: "1024"
output: "1025"
name: "Range_492"
op_type: "Range"

[01/05/2023-11:31:51] [E] [TRT] parsers/onnx/ModelImporter.cpp:776: --- End node ---
[01/05/2023-11:31:51] [E] [TRT] parsers/onnx/ModelImporter.cpp:778: ERROR: parsers/onnx/ModelImporter.cpp:180 In function parseGraph:
[6] Invalid Node - Range_492
[graph.cpp::computeInputExecutionUses::553] Error Code 9: Internal Error (Conv_344: IConvolutionLayer cannot be used to compute a shape tensor)
[01/05/2023-11:31:51] [E] Failed to parse onnx file
[01/05/2023-11:31:51] [I] Finish parsing network model
[01/05/2023-11:31:51] [E] Parsing model failed
[01/05/2023-11:31:51] [E] Failed to create engine from model or file.
[01/05/2023-11:31:51] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8402] # trtexec --onnx=fs_folded2.onnx --saveEngine=fs.trt --fp16 --verbose

Environment

TensorRT Version: 8.4.2-1+cuda11.6
GPU Type: A100
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version:
Operating System + Version: tensorrt:22.08-py3
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Steps To Reproduce

trtexec --onnx=fs_folded2.onnx --saveEngine=fs.trt --fp16 --verbose
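This class of error ("IConvolutionLayer cannot be used to compute a shape tensor") typically means a shape-computation subgraph (here, the inputs of Range_492) ends up depending on an execution tensor such as a convolution output. A common workaround is to fold the shape subgraph offline before handing the model to trtexec, for example with Polygraphy's sanitize tool (a sketch, not a confirmed fix for this model; assumes Polygraphy is installed and fs_folded2.onnx is the file from the command above):

```
polygraphy surgeon sanitize fs_folded2.onnx \
    --fold-constants \
    -o fs_folded3.onnx
trtexec --onnx=fs_folded3.onnx --saveEngine=fs.trt --fp16 --verbose
```

If the shape subgraph still depends on a runtime tensor after folding, the model itself may need to be re-exported so that shape inputs come from constants or graph inputs.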

Hi,

When we tried the latest TensorRT version (8.5.2), we ran into a different error.
Please allow us some time to debug this issue.

[01/06/2023-12:03:50] [E] Error[2]: [constantNode.cpp::checkSanity::19] Error Code 2: Internal Error (Assertion phylum(params.weights.type()) == phylum(*outputs[0]) failed. weights and output must be in same phylum)
[01/06/2023-12:03:50] [E] Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[01/06/2023-12:03:50] [E] Engine could not be created from network

Thank you.

Any update? Thanks.

My local test shows that the provided model cannot run with onnxruntime.

polygraphy run ./fs_folded2.onnx --onnxrt
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Where node. Name:'Where_64' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:583 onnxruntime::Broadcaster::Broadcaster(gsl::span<const long int>, gsl::span<const long int>) largest <= 1 was false. Can broadcast 0 by 0 or 1. 116 is invalid.
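The broadcast failure itself is easy to reproduce in isolation: one input of Where_64 apparently has a zero-sized dimension where the other has 116, and a dimension of 0 only broadcasts against 0 or 1. A minimal numpy sketch of the same rule (the shapes here are illustrative, not read from the model):

```python
import numpy as np

cond = np.zeros((1,), dtype=bool)
a = np.zeros((0,))    # zero-sized dim, e.g. from an empty Range output
b = np.zeros((116,))  # 116-wide dim from the other branch

try:
    # 0 vs 116: neither side is 1, so broadcasting fails,
    # matching the "Can broadcast 0 by 0 or 1. 116 is invalid." message
    np.where(cond, a, b)
except ValueError as e:
    print("broadcast error:", e)
```

This suggests an upstream node is producing an empty tensor, likely because the graph was exported or folded with an unset/zero dynamic dimension.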

It seems a valid input shape is necessary for correct inference. Can you provide a valid shape and sample input data for the graph inputs?

    {texts [dtype=int64, shape=(1, 116)],
     src_lens.1 [dtype=int64, shape=(1,)],
     max_src_len [dtype=int64, shape=()]}
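For reference, inputs matching those signatures can be constructed like this (a sketch only; the zero token IDs and the assumption that max_src_len equals the padded length 116 are placeholders, not values taken from the model):

```python
import numpy as np

seq_len = 116  # padded text length, from the reported shape (1, 116)

feeds = {
    "texts": np.zeros((1, seq_len), dtype=np.int64),    # token IDs (dummy zeros)
    "src_lens.1": np.array([seq_len], dtype=np.int64),  # per-sequence lengths, shape (1,)
    "max_src_len": np.array(seq_len, dtype=np.int64),   # scalar, shape ()
}

for name, arr in feeds.items():
    print(name, arr.dtype, arr.shape)

# With a working model these could then be fed to onnxruntime, e.g.:
#   import onnxruntime as ort
#   sess = ort.InferenceSession("fs_folded2.onnx")
#   outputs = sess.run(None, feeds)
```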