DeepStream on Jetson: switching from FP32 to INT8 fails

I have a Jetson Orin Nano Developer Kit (8 GB) and a custom-trained YOLOv8 model, best_yolov8n_640.pt. I deployed YOLOv8 with TensorRT and the DeepStream SDK, following this tutorial: Deploy YOLOv8 with TensorRT and DeepStream SDK | Seeed Studio Wiki. It worked with FP32, but when I tried to run with INT8 I got this error:
File does not exist: /home/asilbek/PycharmProjects/DeepStream/DeepStream-Yolo/calib.table
OpenCV is required to run INT8 calibrator

deepstream-app: yolo.cpp:98: nvinfer1::ICudaEngine* Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*): Assertion `0' failed.
Aborted (core dumped)
I created a calibration directory with 2000 images and a calibration.txt file listing the paths of those 2000 images.
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
I followed every step of the tutorial and still get this error. Can you help me?
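
For reference, calibration.txt is just a plain list of absolute image paths, one per line. A minimal sketch of how it can be generated, assuming the 2000 images are JPEGs sitting in a calibration/ folder inside the DeepStream-Yolo directory (folder name and extension are assumptions, adjust to your layout):

cd ~/PycharmProjects/DeepStream/DeepStream-Yolo
# write one absolute path per line into calibration.txt
find $(pwd)/calibration -maxdepth 1 -name "*.jpg" | head -n 2000 > calibration.txt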

If you want to use the INT8 format, you need to provide the calib.table file in the corresponding directory.
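
For reference, the INT8 settings go in the nvinfer configuration file that deepstream-app uses (config_infer_primary_yoloV8.txt in the DeepStream-Yolo repo; the file and engine names below are assumptions based on that tutorial). A minimal sketch of the relevant lines:

# nvinfer config sketch for INT8 (file/engine names assumed from the DeepStream-Yolo setup)
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1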

It is supposed to generate this calib.table automatically. I provided the directory with the images, but it did not generate the table and produced this error.
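
For what it's worth, the "OpenCV is required to run INT8 calibrator" line in the log suggests the custom library was built without OpenCV support, which the calibrator needs to load the calibration images. A rebuild sketch, assuming the OPENCV build flag documented in the DeepStream-Yolo README and a JetPack 5.x CUDA version (adjust CUDA_VER to your JetPack):

cd ~/PycharmProjects/DeepStream/DeepStream-Yolo
# rebuild the custom parser library with OpenCV enabled (flag assumed from the repo README)
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo clean
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
# then delete any previously built engine and re-run deepstream-app so calib.table can be generated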

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic.
If you need further support, please open a new one.
Thanks

Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.