DeepStream YOLOv5 INT8 engine build fails

Hello,

I am trying to build an INT8-optimized engine from yolov5s.onnx using DeepStream-Yolo. During the build, this warning appears:
WARNING: INT8 calibration file not specified. Trying FP16 mode.
After that log, building the INT8 engine fails, while building the FP16 engine succeeds.

How can I solve this?

Environment:
Orin NX 8GB + Devkit Jetpack 5.1.2
pytorch: torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
torchvision: 0.16.1
tensorrt: 8.5.2.2-1+cuda11.4
CUDA Driver Version / Runtime Version: 11.4 / 11.4
CUDA Capability Major/Minor version number: 8.7

config_infer_primary_yoloV5.txt (824 Bytes)
deepstream_app_config.txt (932 Bytes)
deepstream-yolo log.txt (4.0 KB)
calibration.txt (65.4 KB)
I have attached the files above; please check them.

You are missing the correct INT8 calibration table. The calibration.txt you provided is just a list of image paths, not a calibration table.

You can get more information from the following link, which describes how to generate the calibration table.
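For what it's worth, a minimal sketch of the relevant nvinfer settings in config_infer_primary_yoloV5.txt (assuming DeepStream-Yolo's usual config layout; calib.table is generated automatically on the first INT8 run, not written by hand):

```
[property]
onnx-file=yolov5s.onnx
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
# Path where the generated INT8 calibration table is read from / written to
int8-calib-file=calib.table
```

With network-mode=1 but no valid int8-calib-file, TensorRT has no calibration data, which matches the warning and the fallback to FP16 you are seeing.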

I solved the issue by adding the OPENCV=1 option when building the custom library:
$ CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
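For reference, the full INT8 flow looks roughly like this (a sketch based on the DeepStream-Yolo README; the INT8_CALIB_* environment variable names and the calib.table filename follow that README and may differ in your version):

```shell
# Rebuild the custom parser library with OpenCV support,
# which DeepStream-Yolo needs to load calibration images
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

# Point the calibrator at the image list and set the calibration batch size
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1

# Running the app now performs calibration, writes calib.table,
# and builds the INT8 engine
deepstream-app -c deepstream_app_config.txt
```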

Thank you.
