INT8 calibration inference crash on Jetson Orin Nano 4GB

Why does my Jetson Orin Nano 4GB crash when running the gen_calibration_cache.py script with the YOLOv8s model? Memory usage is very high, and the inference process also crashes when I use FP16. What should I do?

Hi,

Are you running out of memory?
Could you share the error log with us so we can take a look?

Thanks.

“The issue is resolved. It was caused by setting the workspace memory size too large during model inference. Thank you!”
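For readers hitting the same problem: on a 4GB board the TensorRT builder workspace is a common culprit, and `trtexec` lets you cap it with the `--memPoolSize=workspace:<MiB>` flag (TensorRT 8.4+). Below is a minimal sketch of composing such a command; the file names and the 512 MiB limit are illustrative assumptions, not values from this thread.

```python
def build_trtexec_cmd(onnx_path: str, engine_path: str,
                      workspace_mib: int = 512, use_int8: bool = True) -> str:
    """Compose a trtexec command that caps builder workspace memory.

    workspace_mib: upper bound for the workspace pool in MiB; keep this
    well below physical RAM on memory-constrained boards (illustrative
    default, tune for your device).
    """
    cmd = [
        "trtexec",
        f"--onnx={onnx_path}",
        f"--saveEngine={engine_path}",
        # Limit the workspace memory pool (TensorRT >= 8.4 syntax).
        f"--memPoolSize=workspace:{workspace_mib}",
    ]
    if use_int8:
        cmd.append("--int8")  # INT8 builds also need a calibration cache
    return " ".join(cmd)

# Example: a hypothetical YOLOv8s build with a 512 MiB workspace cap
print(build_trtexec_cmd("yolov8s.onnx", "yolov8s.engine"))
```

If you build engines from Python instead, the equivalent knob is `config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, size_in_bytes)` on the builder config.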

Thanks for the update.

Good to know the issue is solved.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.