TrOCR model running slow on Jetson Nano

Hi, I’m using TrOCR on an NVIDIA Jetson Nano, and inference is taking a considerable amount of time, which slows the whole system down. Is there a way to improve this?

import torch
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')

def perform_ocr(image_path):
    # Load the image and make sure it has three channels
    image = Image.open(image_path).convert("RGB")

    # The processor returns a BatchFeature; take its pixel_values tensor
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

    return generated_text
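
A few generic PyTorch inference speedups may help here: put the model in eval mode, run generation under `torch.inference_mode()` to skip autograd bookkeeping, and move the model to the Jetson's GPU in fp16 if CUDA is available. Also consider the smaller `microsoft/trocr-base-printed` checkpoint instead of the large one. The sketch below demonstrates the pattern on a tiny stand-in module (`TinyModel` is hypothetical, used only so the example runs without downloading TrOCR); the same calls apply to the `VisionEncoderDecoderModel` above.

```python
import torch

# Hypothetical stand-in for the TrOCR model, so the example is self-contained.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(16, 4)

    def forward(self, x):
        return self.proj(x)

model = TinyModel().eval()                       # eval() disables dropout etc.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
if device == "cuda":
    model = model.half()                         # fp16 cuts memory traffic on the Jetson GPU

x = torch.randn(1, 16, device=device)
if device == "cuda":
    x = x.half()                                 # inputs must match the model's dtype

with torch.inference_mode():                     # no autograd graph is built
    out = model(x)

print(out.shape)  # torch.Size([1, 4])
```

For the real model this translates to `model.eval()`, `model.to("cuda").half()` when CUDA is available, casting `pixel_values` to the same device and dtype, and wrapping `model.generate(...)` in `torch.inference_mode()`.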


This subforum is certainly not about TrOCR. I generally recommend that people asking PyTorch questions ask them on a PyTorch forum; NVIDIA experts patrol those from time to time. Another place where you may get better help would be one of the Jetson forums that matches your device.