Unable to convert YOLOv11 model to INT8 on Jetson Orin Nano using TensorRT C++ APIs

Dear Community,

I am trying to convert a YOLOv11n-seg model to INT8 precision using the TensorRT C++ API to get better inference speed. I have been using this GitHub repository to understand the process.

I have also searched GitHub for INT8 conversion using the TensorRT C++ API and found nothing; the majority of projects use FP16. Although FP16 is claimed to give good inference times, I want to use INT8 for my use case.
Below are my specs:
HW: Jetson Orin Nano Dev Kit
JetPack: 6.2 Super Dev
CUDA: 12.6
TensorRT: 10.3.0.30
OpenCV with CUDA: 4.12.0-dev
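As a side note for anyone debugging a setup like this: before wiring INT8 into C++ code, a quick way to check that INT8 engine building works at all on the device is `trtexec`, which ships with TensorRT on JetPack. This is only a sketch; the ONNX filename is a placeholder, and without a calibration cache `trtexec` calibrates with random data, so it verifies the build path rather than accuracy.

```shell
# Build an INT8 engine from an exported ONNX model with trtexec
# (trtexec is bundled with TensorRT on JetPack at this path).
# The model filename below is a placeholder for illustration.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolo11n-seg.onnx \
    --int8 \
    --saveEngine=yolo11n-seg_int8.engine
```

If this fails in the same way as the C++ conversion, the problem is likely in the model export or the TensorRT installation rather than in the repository's code.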

It would be very helpful if you could share your insights on this.
I am also attaching a screenshot of the error I got during INT8 conversion using the above GitHub repository.

Thank you

Regards
Kiranraj K R

Hi,

It’s a third-party source, so please check with the GitHub owner for more information.

However, Ultralytics supports YOLOv11 directly.
You can run it on JetPack 6.2 with the command shared below:
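(The exact command was not preserved in this thread. As an assumption, a typical Ultralytics CLI sequence for exporting a YOLOv11 segmentation model to an INT8 TensorRT engine looks like the following; the model name and calibration dataset are illustrative, not the original reply's values.)

```shell
# Install the Ultralytics package (illustrative; pin a version as needed).
pip install ultralytics

# Export to a TensorRT engine with INT8 calibration.
# int8=True enables INT8; data= supplies a calibration dataset.
yolo export model=yolo11n-seg.pt format=engine int8=True data=coco8-seg.yaml

# Run inference with the generated engine on a sample image.
yolo predict task=segment model=yolo11n-seg.engine source=image.jpg
```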

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.