Description
I'm trying to run an ONNX model in TensorRT, following the "simpleOnnx" sample:
But when I try to create the CUDA engine like this:
engine.reset(createCudaEngine(onnxModelPath, batchSize));
if (!engine)
return 1;
the program fails with this error:
ERROR: Repeated layer name: while/MatMul_1 (layers must have distinct names)
ERROR: Network validation failed.
Environment
TensorRT Version: 7.0.0-1
GPU Type: RTX
Nvidia Driver Version:
CUDA Version: 11.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.3
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Details
I saw this topic:
But I'm using TensorRT 7.0.0, and I didn't understand what to do in graphsurgeon. It would be great if you could elaborate.
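For reference, if the graphsurgeon fix amounts to making every node name in the graph unique before the TensorRT parser sees it, this is the kind of renaming I'd expect it to do. This is only a plain-Python sketch of the idea: `deduplicate_names` is my own hypothetical helper, not a graphsurgeon or ONNX API, and the names are taken from the error above.

```python
def deduplicate_names(names):
    """Append a numeric suffix to repeated names so all names are distinct.

    Sketch only: a real tool would also check the suffixed name doesn't
    collide with another existing name in the graph.
    """
    seen = {}
    result = []
    for name in names:
        if name in seen:
            seen[name] += 1
            result.append(f"{name}_{seen[name]}")
        else:
            seen[name] = 0
            result.append(name)
    return result

# Example with the duplicated layer name from the error message:
names = ["while/MatMul_1", "while/Add", "while/MatMul_1"]
print(deduplicate_names(names))
# → ['while/MatMul_1', 'while/Add', 'while/MatMul_1_1']
```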
Update:
I had two LSTM layers in my network, like this:
lstm1 = LSTM(128)(AB)
drop1 = Dropout(0.25)(lstm1)
lstm2 = LSTM(128)(drop1)
After that, the error was replaced with a different one, which I'll open another topic for:
ERROR: StatefulPartitionedCall/functional_5/lstm_1/PartitionedCall/while_loop:7: region stride in units of elements overflows int32_t. Region dimensions are [2147483647,(# 0 (SHAPE x1:0)),128].
But I guess the problem is related to the fact that I had two LSTM layers.