Tensor volume exceeds (2^31)-1

I trained a Tacotron2 model based on GitHub - Rayhane-mamah/Tacotron-2: DeepMind's Tacotron-2 Tensorflow implementation.
I converted it to ONNX successfully and the ONNX results are correct. However, a large-tensor error occurred when converting the ONNX model to TensorRT.

env:
trt: 8.2.3
cuda: 10.2
cudnn: 7.6.5
onnx: 1.10.0
opset: 13

steps:

  • convert the checkpoint to ONNX
  • simplify the ONNX model
  • convert the ONNX model to a TensorRT engine (see the sketch below)
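For reference, steps 2 and 3 look roughly like the sketch below (file names are placeholders; step 1, the checkpoint-to-ONNX export, is not shown):

import onnx
import tensorrt as trt
from onnxsim import simplify

# Step 2: simplify the exported ONNX model.
model = onnx.load("tacotron2.onnx")
model_sim, ok = simplify(model)
assert ok, "onnx-simplifier could not validate the simplified model"
onnx.save(model_sim, "tacotron2_sim.onnx")

# Step 3: parse the simplified model with TensorRT (the same parse that
# trtexec performs before building the engine).
logger = trt.Logger(trt.Logger.ERROR)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("tacotron2_sim.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

The parse in step 3 fails with the error below.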

[02/14/2022-07:28:27] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[02/14/2022-07:28:27] [E] [TRT] ModelImporter.cpp:779: ERROR: ModelImporter.cpp:166 In function parseGraph:
[6] Invalid Node - generic_loop_Loop__183
[graphShapeAnalyzer.cpp::processCheck::581] Error Code 4: Internal Error ((Unnamed Layer* 755) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,1,160])
[graphShapeAnalyzer.cpp::processCheck::581] Error Code 4: Internal Error ((Unnamed Layer* 755) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,1,160])
[02/14/2022-07:28:34] [E] Failed to parse onnx file
[02/14/2022-07:28:34] [I] Finish parsing network model
[02/14/2022-07:28:34] [E] Parsing model failed
[02/14/2022-07:28:34] [E] Failed to create engine from model.
[02/14/2022-07:28:34] [E] Engine set up failed
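
For what it's worth, the first dimension 2147483647 is INT32_MAX, which I assume comes from the parser not being able to bound the Loop's trip count; the reported shape then overflows the 32-bit volume check:

# Volume of the reported LoopOutput shape [2147483647, 1, 160].
dims = (2147483647, 1, 160)
volume = 1
for d in dims:
    volume *= d
print(volume)              # 343597383520
print(volume > 2**31 - 1)  # True -> "tensor volume exceeds (2^31)-1"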

I have moved the topic to TensorRT - This team may be in a better position to help.

Hi,
We recommend that you check the samples linked below in case of TF-TRT integration issues.

If the issue persists, we recommend reaching out to the TensorFlow forum.
Thanks!

Hi,

Are you using tf2onnx to generate the ONNX file?

Hi,
I am also facing the same error, and I used tf2onnx for the conversion.
Any help would be appreciated!

Hi,

Currently, TRT does not support tensors with more than 2^31-1 elements.
We do not have a workaround other than modifying the network.
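
For illustration, one form that "modifying the network" could take is giving the offending Loop a finite trip count before parsing. This is only a sketch, assuming the unbounded trip count is what produces the [2147483647, 1, 160] shape; the cap value and the tensor name max_decoder_steps are made up for the example:

import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("tacotron2_sim.onnx")   # placeholder file name
graph = model.graph

# Hypothetical fixed upper bound on decoder iterations; it must safely cover
# the longest output the model is expected to generate.
MAX_STEPS = 2000
cap = numpy_helper.from_array(np.array(MAX_STEPS, dtype=np.int64),
                              name="max_decoder_steps")
graph.initializer.append(cap)

for node in graph.node:
    if node.op_type == "Loop":
        # The first Loop input "M" is the maximum trip count; rewiring it to a
        # constant gives the shape analyzer a finite bound instead of INT32_MAX.
        print("capping trip count of", node.name)
        node.input[0] = "max_decoder_steps"

onnx.save(model, "tacotron2_capped.onnx")

Whether this preserves the decoder's stop condition depends on how the graph was exported; capping the maximum decoder steps in the TensorFlow graph before running tf2onnx may be the safer route.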

Thank you.

Hello,
Could you please clarify why this limitation exists?
What is the root cause that limits the tensor volume and raises a runtime exception during model inference while there is still GPU RAM available?
Why not allow inference to proceed until there is no longer enough available memory?
Thanks,

Why is this the limit? When will it be increased?