Originally published at: https://developer.nvidia.com/blog/robust-scene-text-detection-and-recognition-inference-optimization/
In this post, we delve deeper into inference optimization to improve the performance and efficiency of our machine learning models at inference time. We discuss the techniques employed, such as inference computation graph simplification, quantization, and lowered precision. We also showcase the benchmarking results of our scene text detection and recognition models,…
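The post itself walks through the full pipeline; as a rough illustration of the graph-simplification and reduced-precision steps it mentions, one might do something like the sketch below. The use of onnx-simplifier and the file names are my own choices, not necessarily what the post uses.

```python
# Sketch only: simplify an exported ONNX graph, then build a reduced-precision
# TensorRT engine from it. "model.onnx" / "model_sim.onnx" are placeholder names.
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")
model_sim, ok = simplify(model)
assert ok, "simplified graph failed onnx-simplifier's validation check"
onnx.save(model_sim, "model_sim.onnx")

# A lower-precision engine can then be built, for example with trtexec:
#   trtexec --onnx=model_sim.onnx --fp16 --saveEngine=model_fp16.plan
```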
Have you checked the tensor difference between the original and converted model outputs? I think it’s crucial not only to compare the inference speed but also to ensure that the accuracy degradation is acceptable.
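A minimal sketch of that check, assuming a PyTorch model with a single tensor output and its ONNX export (the file name and tolerance here are placeholders):

```python
import numpy as np
import onnxruntime as ort
import torch

def compare_outputs(torch_model, onnx_path, example_input, atol=1e-3):
    """Report the max absolute difference between PyTorch and ONNX Runtime outputs."""
    torch_model.eval()
    with torch.no_grad():
        ref = torch_model(example_input).cpu().numpy()

    sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    onnx_out = sess.run(None, {input_name: example_input.cpu().numpy()})[0]

    max_diff = np.abs(ref - onnx_out).max()
    print(f"max abs diff: {max_diff:.6e}, "
          f"allclose(atol={atol}): {np.allclose(ref, onnx_out, atol=atol)}")
    return max_diff
```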
When I convert to ONNX, its result is not the same as the original model loaded with:
parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True).eval()
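For reference, one common way such an export might look is sketched below; the opset, input size, and file name are my assumptions, not necessarily what you used. Small mismatches sometimes come from data-dependent decoding paths being frozen during tracing, so comparing outputs on a real input, as suggested above, is a good next step.

```python
import torch

# Load the pretrained hub model and export it with a fixed example input.
parseq = torch.hub.load("baudm/parseq", "parseq", pretrained=True).eval()
dummy = torch.rand(1, 3, 32, 128)  # PARSeq's default 3 x 32 x 128 input (assumed here)

torch.onnx.export(
    parseq,
    dummy,
    "parseq.onnx",            # placeholder output path
    input_names=["image"],
    output_names=["logits"],
    opset_version=14,
    do_constant_folding=True,
)

# The compare_outputs() helper sketched earlier in the thread can then
# quantify how far the exported graph drifts from the eager model:
# compare_outputs(parseq, "parseq.onnx", dummy)
```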
Can you share your Python backend code for the PARSeq model? @jwitsoe