INT8 YOLOv8s on Jetson Orin Nano issue with DeepStream 6.3

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2.2

Hi, I have been trying to run inference on YOLOv8s with INT8 calibration on the Jetson Orin Nano. I followed this tutorial: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md. When I follow the INT8 section, the app fails to run and gives me the attached error:

I use the pretrained YOLOv8s from Ultralytics and can convert the .pt to .onnx successfully (the export step is sketched below). The yolov8s.onnx and config_infer_primary_yoloV8_int8.txt are attached below for investigation:

config_infer_primary_yoloV8_int8.txt (680 Bytes)

yolov8s_onnx.zip (36.1 MB)
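For reference, the .pt to .onnx export followed the tutorial's script, roughly like this (script name and flags are per the linked YOLOv8.md and may differ across repo versions):

    # run from the ultralytics checkout, with utils/export_yoloV8.py copied in
    # (per the linked YOLOv8.md)
    python3 export_yoloV8.py -w yolov8s.pt --dynamic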

My questions are:

  1. Do I need to use an INT8 calibration script provided by TensorRT or DeepStream to create calib.table or calibration.txt before running deepstream-app -c deepstream_app_config.txt? (If so, please show me how to generate calib.table or calibration.txt in Python.)

  2. Do you have a practical solution to this issue?

  1. From the screenshot, the error occurs because custom-network-config and model-file are not set, for example:
    custom-network-config=yolov2.cfg
    model-file=yolov2.weights
    Since you have an ONNX model, please set onnx-file instead, as in config_infer_primary_yoloV8.txt; see the sketch below.
  2. Please refer to INT8Calibration.md for how to create the INT8 calibration file.
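A minimal sketch of the relevant [property] keys for an ONNX model (key names follow config_infer_primary_yoloV8.txt in the DeepStream-Yolo repo; file names are placeholders):

    [property]
    # use the ONNX model directly; TensorRT builds the engine from it
    onnx-file=yolov8s.onnx
    model-engine-file=model_b1_gpu0_int8.engine
    # custom-network-config / model-file are Darknet-only keys; omit them here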

Thanks for your answers.

I changed config_infer_primary_yoloV8_int8.txt based on your recommendations:
config_infer_primary_yoloV8_int8.txt (772 Bytes)

It can now load the YOLOv8 model but fails to generate the TensorRT engine file (.engine), as shown in this error:

It seems that calib.table was not created as described in INT8Calibration.md.

In the DeepStream samples (/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector), I see cal_trt.bin; this file is set in config_infer_primary.txt for inference with INT8 calibration:
config_infer_primary.txt (4.0 KB)
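The INT8-related lines in that file are approximately the following (reproduced from the DeepStream 6.3 sample config; exact paths may differ):

    [property]
    model-file=../../models/Primary_Detector/resnet10.caffemodel
    proto-file=../../models/Primary_Detector/resnet10.prototxt
    int8-calib-file=../../models/Primary_Detector/cal_trt.bin
    # network-mode: 0=FP32, 1=INT8, 2=FP16
    network-mode=1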

That is why I guess calib.table or cal_trt.bin is an important file for INT8 inference. How do I create this file?

My Environment
Python: 3.8.10
CUDA: 11.4.315
cuDNN: 8.6.0.166
TensorRT: 8.5.2.2
OpenCV: 4.5.4 without CUDA

Please refer to all the steps in INT8Calibration.md for how to generate the calibration file. In particular, exporting OPENCV=1 in the terminal (before rebuilding nvdsinfer_custom_impl_Yolo) is needed; see the sketch below.
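A condensed sketch of those steps (commands paraphrased from INT8Calibration.md; CUDA_VER=11.4 matches JetPack 5.1.2, and COCO val2017 is only an example source of calibration images):

    # rebuild the custom lib with OpenCV support (the built-in calibrator needs it)
    cd DeepStream-Yolo
    CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo clean
    CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

    # pick ~1000 random images and list their absolute paths for the calibrator
    mkdir -p calibration
    for jpg in $(ls -1 val2017/*.jpg | sort -R | head -1000); do cp "$jpg" calibration/; done
    realpath calibration/*.jpg > calibration.txt

    # point the calibrator at the image list and set the calibration batch size
    export INT8_CALIB_IMG_PATH=$(pwd)/calibration.txt
    export INT8_CALIB_BATCH_SIZE=1

Then set network-mode=1 and int8-calib-file=calib.table in config_infer_primary_yoloV8_int8.txt and run deepstream-app again: calib.table is written during the first INT8 engine build, after which the .engine file should be generated.

If you prefer Python for building the image list, a hypothetical equivalent of the shell loop above:

    # hypothetical Python equivalent: sample 1000 images and write their
    # absolute paths to calibration.txt (this only builds the image list;
    # calib.table itself is generated by DeepStream during the engine build)
    import random
    from pathlib import Path

    images = list(Path("val2017").glob("*.jpg"))  # example dataset path
    sample = random.sample(images, min(1000, len(images)))
    Path("calibration.txt").write_text(
        "\n".join(str(p.resolve()) for p in sample) + "\n"
    )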