TRT Error Repeated tensor name: AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_1

Hi,

I recently tried to accelerate the TensorFlow Attention OCR model. It has several unsupported layers, so I wrote plugins and registered them in the TensorRT namespace. The other layers work fine, but the split plugin causes this error:

TRT Error Repeated tensor name: AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_1

But I checked the .pb and .uff files, and there is no duplicate node named split_1.

Could you help me understand what the problem is and how to solve it? Thanks.

Hi,

Could you please provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version
If possible, please share the script & model file along with the error log to reproduce the issue.

Thanks

Thanks for your reply.

We use a K2200 GPU, driver 440, CUDA 10.1, cuDNN 7.6.2, Python 3.6, TensorFlow 1.14.0, and TensorRT 6.0.1.5.

One more question: can tf.nn.rnn_cell.LSTMCell be converted to UFF and accelerated by TensorRT?

Hi,

The UFF parser doesn't seem to support the LSTM layer:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html

Can you try the TF → ONNX → TRT model flow? The ONNX parser supports the LSTM layer.
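The TF → ONNX → TRT flow can be sketched roughly as below. This is a minimal example, not the exact commands for this model: the file names and the input/output tensor names are placeholders that must be replaced with your model's, and it assumes tf2onnx and a TensorRT installation with trtexec are available.

```shell
# 1) Convert the frozen TensorFlow graph to ONNX with tf2onnx.
#    frozen_graph.pb and the tensor names below are placeholders.
python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs input_image:0 \
    --outputs predicted_chars:0 \
    --opset 11 \
    --output model.onnx

# 2) Build (and time) a TensorRT engine from the ONNX model.
trtexec --onnx=model.onnx --saveEngine=model.trt
```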

Also, could you please share the script and model file so we can help better?

Thanks

Thank you for your reply.
I placed the .pb file, plugin file, and model configuration file at this link. Code: 8a67


I broke the LSTM up into several parts and wrote plugins for the unsupported ops such as clip_by_value and split. In the basic LSTM cell there are 10 splits. When running the first split in TRT there is no problem; however, running the second split produces this error:
TRT Error Repeated tensor name: AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_1
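For context, the operation each of those Split_TRT plugins stands in for is a plain 4-way split of the LSTM gate pre-activations along the feature axis (num_split = 4, dim_split = 1). A minimal NumPy sketch of that computation (the batch and num_units values here are made up for illustration):

```python
import numpy as np

# TF's LSTMCell computes all four gate pre-activations (i, j, f, o)
# in a single matmul, then splits the result into 4 equal chunks
# along axis 1 -- this is the op the Split_TRT plugin replaces.
batch, num_units = 2, 3
lstm_matrix = np.arange(batch * 4 * num_units, dtype=np.float32).reshape(
    batch, 4 * num_units
)

# num_split=4, dim_split=1: each chunk has shape (batch, num_units).
i, j, f, o = np.split(lstm_matrix, 4, axis=1)
```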

Looking forward to your response. Thanks

Hi,

I am not able to access the zip file; could you please upload it to the forum itself?
Meanwhile, could you please check that all tensors have distinct names after the 10 splits? This error normally occurs when the same tensor name is repeated in the model.

Thanks

Hi SunilJB,

I use different names, like this:
split_0 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split", op="Split_TRT", num_split=4, dim_split=1)
split_11 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_1", op="Split_TRT", num_split=4, dim_split=1)
split_12 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_2", op="Split_TRT", num_split=4, dim_split=1)
split_13 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_3", op="Split_TRT", num_split=4, dim_split=1)
split_14 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_4", op="Split_TRT", num_split=4, dim_split=1)
split_15 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_5", op="Split_TRT", num_split=4, dim_split=1)
split_16 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_6", op="Split_TRT", num_split=4, dim_split=1)
split_17 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_7", op="Split_TRT", num_split=4, dim_split=1)
split_18 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_8", op="Split_TRT", num_split=4, dim_split=1)
split_19 = gs.create_plugin_node(name="AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/lstm_cell/split_9", op="Split_TRT", num_split=4, dim_split=1)
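A quick way to double-check the point raised above, i.e. that the ten plugin node names really are distinct (the "Repeated tensor name" error means TensorRT saw the same tensor name twice while parsing the graph), is a small standalone sketch like this:

```python
# Rebuild the ten node names from the listing above and verify there
# are no duplicates among them.
prefix = (
    "AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/"
    "attention_decoder/lstm_cell/"
)
names = [prefix + "split"] + [prefix + f"split_{k}" for k in range(1, 10)]

# If any name were repeated, the set would be smaller than the list.
assert len(names) == len(set(names)), "duplicate node name found"
```

Note that even with distinct plugin node names, a duplicate can still arise from the tensors the plugin emits, so it is worth checking the output tensor names in the converted graph as well.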
I put the file on Google Drive; the link is here. Hopefully you can access it this time.
https://drive.google.com/drive/folders/14YDSLOUfqE8uXvQoFCfXXNiYQGjV3t7C?usp=sharing

Thanks

We are deprecating the Caffe parser and UFF parser in TensorRT 7, so we recommend trying the ONNX workflow; let us know if the issue persists.

Thanks

I tried TF → ONNX, then TRT inference, and it works now. Thanks


Hi, could you please provide me with a script to convert the TensorFlow Attention OCR model to ONNX? I am facing conversion issues with unsupported layers. I want to run inference on a Jetson Nano using TensorRT. Could you share the plugin file and conversion scripts? It would be helpful.
Thank you

Hi @SunilJB,

Are any additional plugins needed for the conversion?

Could you provide me the conversion script for it?