How to run Tesseract OCR on an inference engine

Hardware used: Jetson Nano 2GB
Programming language: Python (VS Code)

We are running Tesseract OCR version 4 on a live video stream, and the output runs at less than 1 fps. We want to run Tesseract on an inference engine to increase the speed. Is this possible? If yes, how?
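To quantify the baseline before optimizing, it helps to time the OCR call per frame. Below is a minimal sketch: the timing helper is generic, while the `pytesseract`/OpenCV usage in the comment is an assumption about how your pipeline feeds frames to Tesseract, not a confirmed part of your setup.

```python
import time

def measure_fps(process_frame, frames):
    """Return the average frames per second achieved by an OCR callable
    when run over a list of frames."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Hypothetical usage with OpenCV + pytesseract (names assumed, not verified here):
#   import cv2, pytesseract
#   cap = cv2.VideoCapture(0)
#   frames = [cap.read()[1] for _ in range(10)]
#   fps = measure_fps(pytesseract.image_to_string, frames)
#   print(f"baseline OCR speed: {fps:.2f} fps")
```

Measuring this baseline first gives you a concrete number to compare against any TensorRT-based speed-up.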


It depends on the layers used in Tesseract's model (Tesseract 4's recognizer is LSTM-based).
You can find the TensorRT support matrix in the following document:

And here is the performance benchmark for Jetson.
You can first check whether the speed-up would meet your requirements:
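To see whether a TensorRT port is feasible at all, the first step is to compare the op types in the model against the support matrix. Below is a hedged sketch of that comparison; in practice the op list would come from an export of Tesseract's LSTM recognizer (e.g. to ONNX) and the supported set would be filled in from the actual support matrix document — both values shown here are illustrative assumptions.

```python
def unsupported_ops(model_ops, trt_supported):
    """Return the set of op types used by the model that the
    support matrix does not list."""
    return set(model_ops) - set(trt_supported)

# Illustrative placeholders only -- replace with the real export and
# the real support matrix contents.
example_model_ops = ["Conv", "LSTM", "Softmax"]    # hypothetical ops from an export
example_supported = {"Conv", "Softmax", "MatMul"}  # hypothetical support-matrix entries

missing = unsupported_ops(example_model_ops, example_supported)
# Any ops left in `missing` would need a custom plugin or a model change
# before the model could run on TensorRT.
```

If the difference is empty, a conversion is at least structurally possible; otherwise each unsupported layer needs a plugin or a workaround.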