TrOCR model running slowly on Jetson Nano

Hi, I’m running TrOCR on an NVIDIA Jetson Nano, and inference is taking a considerable amount of time per image, which slows the whole system down. Is there a way to speed this up?

import torch
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

# Use the GPU if this PyTorch build has CUDA support, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')
model.to(device)  # keep the model on the same device as the inputs
model.eval()

def perform_ocr(image_path):
    image = Image.open(image_path).convert("RGB")

    # Preprocess the image and move the pixel tensor to the model's device
    pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device)
    with torch.no_grad():  # inference only, no gradient bookkeeping needed
        generated_ids = model.generate(pixel_values)
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

    return generated_text
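
One idea I had was to switch to the smaller 'microsoft/trocr-base-printed' checkpoint and run it in half precision on the GPU. Here's a sketch of what I mean (perform_ocr_fast and the max_new_tokens cap are just placeholder names/values of mine; I'm also assuming my PyTorch build on the Nano has working CUDA support and that fp16 doesn't hurt accuracy for this model, neither of which I've verified):

import torch
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Smaller sibling of the large checkpoint; less compute per image
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed')
model.to(device)
model.eval()
if device.type == "cuda":
    model.half()  # fp16 halves memory traffic; assumes no overflow/accuracy issues here

def perform_ocr_fast(image_path):
    image = Image.open(image_path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device)
    if device.type == "cuda":
        pixel_values = pixel_values.half()  # match the model's dtype
    with torch.no_grad():
        # Capping max_new_tokens bounds the autoregressive decode length (64 is a guess)
        generated_ids = model.generate(pixel_values, max_new_tokens=64)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

Would something like this be the right direction, or is there a better approach on the Nano?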

Thanks

This subforum is certainly not about TrOCR. I generally recommend that people with PyTorch questions ask them on a PyTorch forum such as discuss.pytorch.org; NVIDIA experts patrol it from time to time. Another place where you may get better help is one of the Jetson forums that matches your device.