An attention OCR model has been trained with TensorFlow on about 400,000 images to read the number plates in car images. Training produces checkpoints for this model. How does one run inference from these checkpoints on a Jetson Nano, and how can the model be optimized with TensorRT?
Environment
TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Windows
Python Version (if applicable): Python 3.6
TensorFlow Version (if applicable): 1.15
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
We have successfully converted the checkpoints to a frozen .pb file, but are unable to convert that to either ONNX or UFF format.
For the UFF format, we are getting the error described below:
ValueError: cannot create an OBJECT array from memory buffer
And while converting to ONNX, I am getting: ValueError: Node 'cond/ExpandDims' has an _output_shapes attribute inconsistent with the GraphDef for output #0: Shapes must be equal rank, but are 1 and 0.
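For reference, the checkpoints were frozen to the .pb roughly like this (a minimal sketch using the TF 1.x freezing utilities; the checkpoint paths and the "prediction" output node name are placeholders and may need adjusting for the actual model):

import tensorflow as tf

# minimal sketch: restore a TF 1.x checkpoint and freeze it to a .pb
# (paths and the output node name are placeholders)
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    saver = tf.compat.v1.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["prediction"])
    with tf.io.gfile.GFile("frozen_graph.pb", "wb") as f:
        f.write(frozen.SerializeToString())

And the UFF conversion script being tried is: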
import graphsurgeon as gs
import tensorflow as tf
# import tensorrt as trt
import uff

if __name__ == "__main__":
    # USER DEFINED VARIABLES
    output_nodes = ["prediction"]
    input_node = "input_image_as_bytes"
    graph_pb = ""  # please enter the path to the frozen TensorFlow graph (.pb) shared yesterday
    # END USER DEFINED VARIABLES

    # load the frozen graph into a graphsurgeon DynamicGraph
    dynamic_graph = gs.DynamicGraph(graph_pb)

    # convert to UFF
    uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(),
                                    output_nodes=output_nodes)
    print("converted to UFF")
I have also trained Attention OCR on my own dataset using TensorFlow 1.15. When I tried to convert to UFF and then to TensorRT, I got errors about unsupported layers such as Fill, Split, AddV2, FusedBatchNormV2, and a couple of others. How can these layers be supported when porting the model to TensorRT? Please let me know a possible solution to get it working in TensorRT. Thank you.
Yes, I trained it using the same repo. I got the output and everything works perfectly, and the model converted to UFF as well. But as I mentioned above, some of its layers are not supported in TensorRT, so I cannot port it without handling them. I would need to write custom plugins for those layers, but I do not know how.
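From what I understand, the usual approach is to map each unsupported TensorFlow op onto a plugin node with graphsurgeon before calling uff.from_tensorflow, something along these lines (a rough sketch for the Fill op only; the plugin op name "FillPlugin_TRT" is hypothetical, and each plugin op still needs a matching IPluginV2 implementation compiled and registered with TensorRT on the Jetson side):

import graphsurgeon as gs
import uff

# sketch: load the frozen graph (path is a placeholder)
dynamic_graph = gs.DynamicGraph("frozen_graph.pb")

# find the unsupported TensorFlow nodes by op type
fill_nodes = dynamic_graph.find_nodes_by_op("Fill")

# replace each unsupported node with a plugin node of the same name,
# so the UFF parser emits a plugin call instead of failing on the op
namespace_map = {
    node.name: gs.create_plugin_node(name=node.name, op="FillPlugin_TRT")
    for node in fill_nodes
}
dynamic_graph.collapse_namespaces(namespace_map)

uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(),
                                output_nodes=["prediction"])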
With the deprecation of the UFF parser in TRT 7, I encourage using tf2onnx and the ONNX parser instead, as more ops should be supported by default, and plugin support should be coming soon.
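Something along these lines (a rough sketch; the input/output node names and file names are taken from the posts above and may need adjusting for your graph):

# convert the frozen graph to ONNX
python -m tf2onnx.convert \
    --graphdef frozen_graph.pb \
    --inputs input_image_as_bytes:0 \
    --outputs prediction:0 \
    --opset 11 \
    --output attention_ocr.onnx

# build and serialize a TensorRT engine from the ONNX model on the Jetson Nano
trtexec --onnx=attention_ocr.onnx --saveEngine=attention_ocr.engine --fp16

The serialized engine can then be deserialized on the Nano with the TensorRT Python API (tensorrt.Runtime.deserialize_cuda_engine) for inference.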