I have successfully tested text detection using EasyOCR with a PyTorch model on a Jetson Nano running JetPack 4.6.1, but it is not as fast as I need.
From what I have read, it can use a TensorRT model to get the most out of the Jetson Nano's GPU. The problem is that the Nano is not able to convert the model from PyTorch to TensorRT when I use the "use_trt" flag; I think it is simply not capable of handling such a heavy task. What could I do for the conversion if I don't have a more powerful machine for that task?