I use the pretrained YOLOv8s from Ultralytics and can convert the .pt file to .onnx successfully. The yolov8.onnx and config_infer_primary_yoloV8_int8.txt are attached below for investigation:
Do I need to use the INT8 calibration script provided by TensorRT or DeepStream to create calib.table or calibration.txt before running deepstream-app -c deepstream_app_config.txt? (If so, please show me how to generate calib.table or calibration.txt in Python.)
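For what it's worth, the calibration.txt used by DeepStream-Yolo-style INT8 calibration is just a plain-text list of image paths, one per line. A minimal sketch of generating it in Python (the folder name, extensions, and sample count here are illustrative assumptions, not fixed requirements):

```python
import os
import random

def write_calibration_list(image_dir, out_file="calibration.txt", num_images=1000):
    """Write a plain-text list of calibration image paths, one per line.

    The images should be representative of the deployment data; a random
    subset of the training/validation set is the usual choice.
    """
    images = [os.path.join(image_dir, f)
              for f in os.listdir(image_dir)
              if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    random.shuffle(images)          # pick a random representative subset
    selected = images[:num_images]
    with open(out_file, "w") as fh:
        fh.write("\n".join(selected) + "\n")
    return selected
```

The file it produces is then pointed to by the calibrator (e.g. via the INT8_CALIB_IMG_PATH environment variable in DeepStream-Yolo); the actual calib.table is written by TensorRT itself during the first engine build.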
Do you have any practical solution to this issue?
From the screenshot, the error occurs because custom-network-config and model-file are not set. For example:
custom-network-config=yolov2.cfg
model-file=yolov2.weights
If you have an ONNX model, please set onnx-file instead, as in config_infer_primary_yoloV8.txt.
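For an ONNX model, the relevant lines in the nvinfer config look roughly like this (the file names are placeholders for your own model):

```
[property]
# ONNX model replaces the cfg/weights pair
onnx-file=yolov8s.onnx
# engine file TensorRT writes on the first run
model-engine-file=model_b1_gpu0_fp32.engine
```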
Please refer to INT8Calibration.md for how to create the INT8 calibration file.
In the DeepStream sample app (/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector), I see cal_trt.bin; this file is set in config_infer_primary.txt for inference with INT8 calibration: config_infer_primary.txt (4.0 KB)
That is why I guess calib.table or cal_trt.bin is an important file for INT8 inference. How do I create this file?
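For reference, the INT8-related keys in an nvinfer config look like this (the calibration file name is whatever your calibration run produced, e.g. cal_trt.bin or calib.table):

```
[property]
# calibration table produced by a previous INT8 calibration run
int8-calib-file=cal_trt.bin
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
```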
My Environment
Python: 3.8.10
CUDA: 11.4.315
cuDNN: 8.6.0.166
TensorRT: 8.5.2.2
OpenCV: 4.5.4 without CUDA
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please refer to all the steps in INT8Calibration.md for how to generate the calibration file.
In particular, exporting OPENCV=1 in the terminal is required before building the custom library.
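Concretely, the steps in INT8Calibration.md boil down to something like the following (paths are illustrative; the environment variable names are the ones DeepStream-Yolo's calibrator reads):

```shell
# 1. rebuild the custom lib with OpenCV enabled (run inside the DeepStream-Yolo repo):
#      CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
# 2. build a list of calibration images and export the calibrator settings
mkdir -p calibration
# (copy a few hundred representative images into ./calibration first)
ls calibration/*.jpg > calibration.txt 2>/dev/null || true
export INT8_CALIB_IMG_PATH=$(pwd)/calibration.txt
export INT8_CALIB_BATCH_SIZE=1
# 3. run deepstream-app with network-mode=1 in the nvinfer config;
#    the first run builds the engine and writes calib.table next to the config:
#      deepstream-app -c deepstream_app_config.txt
```

Once calib.table exists, later runs reuse it (and the serialized engine) instead of recalibrating.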