Jetson Nano OCR easyocr

I have successfully tested text detection using easyocr with the PyTorch model on a Jetson Nano running JetPack 4.6.1, but it is not as fast as I need.
But as per the referenced project, it utilizes a TensorRT model to leverage the maximum GPU performance from the Jetson Nano. The problem is that the Jetson Nano is not able to convert the model from PyTorch to TensorRT when I use the "use_trt" flag for the conversion; I think it is not capable of handling such a big task. What could I do for the conversion if I don't have any powerful machine for that task?
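For example, would something like this work on the Nano itself, after exporting the detection model to ONNX with torch.onnx.export on whatever machine I have? The file names here are just placeholders, and this is not the project's "use_trt" path:

$ /usr/src/tensorrt/bin/trtexec --onnx=detector.onnx --saveEngine=detector_fp16.trt --fp16 --workspace=256

My understanding is that trtexec builds the engine with a much smaller memory footprint than a Python-side conversion, so it may fit within the Nano's 4 GB.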

Hello,

Thanks for visiting the NVIDIA Developer forums! Your topic will be best served in the Jetson category.

I will move this post over for visibility.

Cheers,
Tom

Hi,

Have you maximized the device performance first?

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
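
You can confirm the current power mode with:

$ sudo nvpmodel -q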

Thanks.

Hi AastaLLL,

Yes, I have already done that.

Thanks

Hi,

Please run tegrastats and check the GPU utilization.
If the GPU is at full load, the issue is likely the computation limit of the Nano device.
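
For example, you can monitor it with the command below while running your OCR pipeline; the GR3D_FREQ field shows the GPU load:

$ sudo tegrastats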

Thanks.
