Repeated layer name: while/MatMul_1 (layers must have distinct names)

I’m trying to convert a TF model (based on LSTM layers) to a TRT engine, but I’m running into trouble. (I’ve actually tried different models and hit the same problem, so I don’t think the issue is with the model itself.)

The model I’m currently using is:

text_lstm_model.onnx (3.1 MB)

I’m using trtexec to build the engine. The error I’m getting is:

----- Parsing of ONNX …/text_lstm_model.onnx is Done ----
[07/28/2021-09:47:01] [E] [TRT] Repeated layer name: while/MatMul_1 (layers must have distinct names)
[07/28/2021-09:47:01] [E] [TRT] Network validation failed.
[07/28/2021-09:47:01] [E] Engine creation failed
[07/28/2021-09:47:01] [E] Engine set up failed

It seems that the parsing is OK, but the problem appears when building the network. My guess is that the while loop is the issue. What should I do? Is there any trick to implementing LSTM-based models?


Could you check this comment first?


I already checked it, but I still don’t understand what I should do.

I’ve been doing some research about this and found the following:

I created the LSTM model in two different ways (model is the same):

1. Rolled LSTMs:


2. Unrolled LSTMs:


When converting the unrolled model to TensorRT, the conversion succeeds. However, when trying to convert the rolled model, I get the error from the first post (Repeated layer name: while/MatMul_1 (layers must have distinct names)).

Using an unrolled LSTM model is only feasible when the sequence is short, which is not my case, so I need to use the rolled LSTM model.

The error corresponds to the “Loop” node of the ONNX graph, which is represented as just one node. I don’t understand how I can solve this with GraphSurgeon as you proposed. How should I handle that “Loop” node?

Thanks in advance!!


May I know more details about your use case?

The While loop is not a supported layer in TensorRT, so you need to modify the model.
The first alternative is to update the model with the graphsurgeon API.
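Since the error complains about duplicated layer names, one part of the graphsurgeon-based fix is a renaming pass that makes every node name unique. The real code would iterate over `graph.nodes` in onnx-graphsurgeon (or over `model.graph.node` with the `onnx` package); this is a minimal stdlib sketch of the renaming logic only, with plain strings standing in for node names:

```python
from collections import Counter

def make_names_unique(names):
    """Append a numeric suffix to every repeated name so that each
    layer name becomes distinct (e.g. a second 'while/MatMul_1'
    becomes 'while/MatMul_1_1')."""
    seen = Counter()
    unique = []
    for name in names:
        if seen[name]:
            # A production version should also check that the
            # suffixed name does not collide with an existing one.
            unique.append(f"{name}_{seen[name]}")
        else:
            unique.append(name)
        seen[name] += 1
    return unique

# Two nodes sharing the name from the error message:
names = ["while/MatMul_1", "Add", "while/MatMul_1"]
print(make_names_unique(names))
```

In the graphsurgeon version you would apply the same idea by assigning the deduplicated names back to each node's `name` attribute and re-exporting the model before running trtexec again.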

Another approach is to unroll the while loop directly.
May I know why you need the rolled version for your use case?

Is the loop iteration a dynamic parameter in your use case?
Otherwise, unrolling the model into the corresponding components should produce the same result.


The use case is predictive maintenance for an industrial machine, so I will be working with time series. The input sequences will be so long that I suspect an unrolled model is not the right choice. Anyway, I’ll try once I have the final model; for now I’m just running tests with a simplified model.

Is there any guide for the graphsurgeon API? I couldn’t find anything clear.

Thanks in advance.