How to optimize an Attention OCR model for Jetson Nano using TensorRT


An Attention OCR model has been trained using TensorFlow on about 400,000 images to read the number plates in car images. We have checkpoints for this model. How does one run inference using these checkpoints on a Jetson Nano, and how can this model be optimized using TensorRT?


TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Windows
Python Version (if applicable): Python 3.6
TensorFlow Version (if applicable): 1.15
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

You can convert .pb -> ONNX -> TRT; please refer to the links below:

You can also use the trtexec command to generate the TRT file:

Importing ONNX in TRT:

You can refer to the samples below:
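As a concrete starting point, the two commands above can be assembled like this. This is a hedged sketch: the node names (`input_image_as_bytes:0`, `AttentionOcr_v1/predicted_chars:0`) and file paths are placeholders taken from this thread, not verified values for your graph.

```python
import shlex

# Sketch: assemble the tf2onnx and trtexec command lines for the
# .pb -> ONNX -> TRT pipeline. Node names and paths are placeholders.

def tf2onnx_cmd(pb_path, inputs, outputs, onnx_path, opset=11):
    # python -m tf2onnx.convert turns a frozen TensorFlow graph into ONNX.
    return ("python -m tf2onnx.convert"
            " --input " + shlex.quote(pb_path) +
            " --inputs " + inputs +
            " --outputs " + outputs +
            " --output " + shlex.quote(onnx_path) +
            " --opset " + str(opset))

def trtexec_cmd(onnx_path, engine_path):
    # trtexec parses the ONNX file and serializes a TensorRT engine.
    return ("trtexec --onnx=" + shlex.quote(onnx_path) +
            " --saveEngine=" + shlex.quote(engine_path))

print(tf2onnx_cmd("frozen_graph.pb", "input_image_as_bytes:0",
                  "AttentionOcr_v1/predicted_chars:0", "model.onnx"))
print(trtexec_cmd("model.onnx", "model.trt"))
```

Run the first command on the host where TensorFlow is installed, and the second on the Jetson Nano itself, since TensorRT engines are specific to the GPU they are built on.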

We have successfully converted to a .pb file but are unable to convert it to the ONNX and UFF formats.
For the UFF format, we are getting the error described below:
ValueError: cannot create an OBJECT array from memory buffer
And while converting to ONNX, I am getting: ValueError: Node 'cond/ExpandDims' has an _output_shapes attribute inconsistent with the GraphDef for output #0: Shapes must be equal rank, but are 1 and 0.

Please guide.
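For what it's worth, a workaround often suggested for this particular rank-mismatch error is to strip the stale `_output_shapes` attributes from the frozen GraphDef before conversion, so shape inference is redone at import time instead of trusting the recorded (inconsistent) shapes. A sketch, assuming `graph_def` is a loaded `tf.GraphDef`; whether it fixes this specific graph is untested:

```python
def strip_output_shapes(graph_def):
    """Delete stale _output_shapes attrs from every node so the importer
    re-infers shapes instead of trusting inconsistent recorded ones."""
    for node in graph_def.node:
        if "_output_shapes" in node.attr:
            del node.attr["_output_shapes"]
    return graph_def
```

After stripping, re-serialize the graph with `graph_def.SerializeToString()`, write it to a new .pb, and retry the ONNX conversion.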

Could you please share the script and model file to reproduce the issue so we can help better?


Yes, sure.


Script for conversion:

import graphsurgeon as gs
import tensorflow as tf
#import tensorrt as trt
import uff

if __name__ == "__main__":
    output_nodes = ["prediction"]
    input_node = "input_image_as_bytes"
    graph_pb = ""  # please enter the path to the tensorflow graph shared yesterday

    dynamic_graph = gs.DynamicGraph(graph_pb)

    # convert to UFF
    uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), output_nodes)
    print("converted to UFF")

Could you please share the model file as well to reproduce the issue?


Yes, sure. Here's the link to the model file.

I have also trained Attention OCR on my dataset using TensorFlow 1.15. When I tried to convert it to UFF and then to TensorRT, I got errors about unsupported layers: Fill, Split, AddV2, FusedBatchNormV2, and two others. How can I add support for these layers when porting the model to TensorRT? Please let me know a possible solution to get it working in TensorRT. Thank you.
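To tackle this systematically, a first step is to list the op types in the frozen graph and diff them against the set the converter supports. A toy illustration of that diff; note that `SUPPORTED` here is a made-up subset for demonstration, not the UFF converter's real table:

```python
# Toy sketch: report which op types in a graph fall outside a supported set.
# SUPPORTED is illustrative only -- consult the converter's documentation
# for the real list of supported ops.
SUPPORTED = {"Conv2D", "MatMul", "Relu", "BiasAdd", "MaxPool", "Softmax",
             "ConcatV2", "Reshape", "Add", "Mul"}

def unsupported_ops(op_types):
    """Return the op types that would need custom plugins, sorted."""
    return sorted(set(op_types) - SUPPORTED)

# The ops reported in this thread:
print(unsupported_ops(["Conv2D", "Fill", "Split", "AddV2", "FusedBatchNormV2"]))
# -> ['AddV2', 'Fill', 'FusedBatchNormV2', 'Split']
```

Each op left in that list either needs a custom plugin or has to be rewritten out of the graph (for example, AddV2 and FusedBatchNormV2 can sometimes be mapped to their older Add/FusedBatchNorm equivalents by a graph rewrite).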

How did you convert the model to UFF? Did you get any errors or warnings related to unsupported layers?

Hey! Did your model convert to UFF successfully, with the unsupported-layer errors only appearing on conversion to TensorRT?
Also did you train using this model (
Thank you!

Yes, I trained it using the same repo. I got the output and everything works perfectly. It converted to UFF as well. But as I mentioned above, some of its layers are not supported in TensorRT, so I could not port it without writing custom plugins for those layers, and I do not know how to write them.

With the deprecation of the UFF parser in TRT 7, I encourage using tf2onnx and the ONNX parser instead, as more ops should be supported by default, and plugin support should be coming soon.


Can you share a reference for how you converted it to UFF? Which script did you use?

Also, did you convert from the checkpoints or the frozen graph?

Did you convert the model to UFF without any problems? No unsupported layers?

Have you tried converting Attention OCR to ONNX and then to TensorRT? Did it work?

I have converted it both ways: using the checkpoint folder, and also using the .pb file. How did you convert it?

Can you please share your conversion script from checkpoints and your .pb model file? It would be of great help!
Thank you.

# Conversion via a frozen graph:

import uff
from tensorflow.python.tools import freeze_graph

def graph_conversion():
    checkpoint_path = 'datasets/data/number_plate/model_checkpoints_50k'

    freeze_graph.freeze_graph(checkpoint_path + '/graph.pbtxt', '', False,
                              checkpoint_path + '/model.ckpt-299490', 'AttentionOcr_v1/predicted_chars',
                              'save/restore_all', 'save/Const:0',
                              checkpoint_path + '/' + 'frozen_graph.pb', False, '')

    print('Frozen graph is created in:', checkpoint_path, '--> frozen_graph.pb')

    output_nodes = ['AttentionOcr_v1/predicted_chars']
    uff_model = uff.from_tensorflow_frozen_model(checkpoint_path + '/frozen_graph.pb', output_nodes,
                                                 output_filename='datasets/data/number_plate/model_checkpoints_50k/uff_model_from_frozen.uff')
    print('graph conversion is:', uff_model)
    return uff_model

From checkpoints:

import tensorflow as tf
import uff

def uff_conversion():
    checkpoint = tf.train.get_checkpoint_state('datasets/data/number_plate/model_checkpoints_50k')
    input_checkpoint = checkpoint.model_checkpoint_path

    saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=True)

    graph = tf.get_default_graph()
    input_graph_def = graph.as_graph_def()

    output_nodes_names = ['AttentionOcr_v1/predicted_chars']
    #input_graph_def = 'datasets/data/model_checkpoints/frozen.pb'

    with tf.Session(graph=graph) as sess:
        saver.restore(sess, input_checkpoint)
        frozen_graph = tf.graph_util.convert_variables_to_constants(sess, input_graph_def, output_nodes_names)
        frozen_graph = tf.graph_util.remove_training_nodes(frozen_graph)
        uff_model = uff.from_tensorflow(frozen_graph, output_nodes_names,
                                        output_filename='datasets/data/number_plate/model_checkpoints_50k/uff_model_from_checkpoints.uff')
    return uff_model

For any unsupported layer, you need to create a custom plugin.
Please refer to TRT samples:

This model file seems to have some issue. Could you please resend the .pb file that successfully converted to the UFF model?