Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2.2
Hi, I have been trying to run YOLOv8s inference with INT8 calibration on the Jetson Orin Nano. I followed this tutorial: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md. When I try the INT8 section, the app fails to run and gives me the attached error:
I use the pretrained YOLOv8s from Ultralytics and can convert the .pt to .onnx successfully (the export step I ran is sketched after the attachments). The yolov8s.onnx and config_infer_primary_yoloV8_int8.txt are attached below for investigation:
config_infer_primary_yoloV8_int8.txt (680 Bytes)
yolov8s_onnx.zip (36.1 MB)
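For context, this is roughly the conversion step I ran. The tutorial uses its own export script so the output layout matches the DeepStream-Yolo parser; the plain Ultralytics export below is only my simplified sketch of that step, not the tutorial's exact script:

```python
from ultralytics import YOLO

# Load the pretrained YOLOv8s checkpoint and export it to ONNX.
# (Simplified sketch of the conversion step; the tutorial's own export
# script may add extra output reshaping for the DeepStream parser.)
model = YOLO("yolov8s.pt")
model.export(format="onnx", opset=12)  # writes yolov8s.onnx next to the .pt
```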
My questions are:
- Do I need to use the INT8 calibration script provided by TensorRT or DeepStream to create calib.table or calibration.txt before running deepstream-app -c deepstream_app_config.txt? (If so, please show me how to generate calib.table or calibration.txt in Python; my rough attempt is sketched after these questions.)
- Do you have any practical solution to this issue?
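In case it helps show what I have tried so far, below is my rough understanding of how a calibration cache could be generated with the TensorRT Python API. Everything here is my own sketch: the class name, the calib_images/ directory, and the preprocessing are placeholders I made up, and I am not sure the cache this writes is what nvinfer expects as int8-calib-file.

```python
import os

import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt


class YoloCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed images to TensorRT during INT8 calibration
    and writes the resulting cache to calib.table."""

    def __init__(self, image_dir, cache_file="calib.table",
                 batch_size=1, input_shape=(3, 640, 640)):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.input_shape = input_shape
        self.images = [os.path.join(image_dir, f)
                       for f in sorted(os.listdir(image_dir))]
        self.index = 0
        nbytes = batch_size * int(np.prod(input_shape)) * 4  # float32
        self.device_input = cuda.mem_alloc(nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.images):
            return None  # no more data: calibration ends here
        batch = np.stack([self._preprocess(p) for p in
                          self.images[self.index:self.index + self.batch_size]])
        self.index += self.batch_size
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def _preprocess(self, path):
        import cv2
        _, h, w = self.input_shape
        img = cv2.resize(cv2.imread(path), (w, h))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        return img.transpose(2, 0, 1)  # HWC -> CHW

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None  # no cache yet: run real calibration

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)


def build_int8_engine(onnx_path, calibrator):
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator  # calib.table is written during build
    return builder.build_serialized_network(network, config)


if __name__ == "__main__":
    engine = build_int8_engine("yolov8s.onnx", YoloCalibrator("calib_images/"))
```

My hope was to point int8-calib-file in config_infer_primary_yoloV8_int8.txt at the calib.table this writes, but I do not know whether that is the supported workflow, so any correction is welcome.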