INT8 YOLOv8n on Jetson AGX Orin: issue with DeepStream

Hi, I have been trying to deploy YOLOv8n on the Jetson AGX Orin platform. I followed the tutorial at https://github.com/marcoslucianops/DeepStream-Yolo#yolov8-usage and was able to get FP32 and FP16 working. However, when I try the INT8 section, the app fails to run and gives me an engine build error about a CUDA engine creation failure. The log is attached below, and the config files I used with DeepStream are attached as well. Any thoughts on how I might solve this issue?

Here are my config txt files:

/**********config_infer_primary_yoloV8_int8.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=best.onnx
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calibration.txt
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

/*******************deepstream_app_config_int8.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=640
height=640
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///home/sc/A.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Consolas
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=640
height=640
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8_int8.txt

[tests]
file-loop=0

Please provide complete information as applicable to your setup. Thanks

Hardware Platform (Jetson / GPU)
NVIDIA Jetson AGX Orin
DeepStream Version
6.2
JetPack Version (valid for Jetson only)
5.1
TensorRT Version
8.5.2.2
NVIDIA GPU Driver Version (valid for GPU only)
CUDA 11.4
Issue Type (questions, new requirements, bugs)
questions

Could you describe how you generated the calibration.txt file referenced in your config file?

Same as the DeepStream-Yolo-master/docs/INT8Calibration.md tutorial. I made a directory named calibration under DeepStream-Yolo-master and put my 1000 calibration JPG images in it, then used the `realpath calibration/*jpg > calibration.txt` command to create the calibration.txt file.
I also tried an empty calibration.txt and got the same ERROR message.
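
(For reference, the full sequence in docs/INT8Calibration.md looks roughly like the sketch below; the INT8_CALIB_* environment variable names are taken from that doc, so double-check them against your copy.)

# List the calibration images and point the custom calibrator at them
realpath calibration/*jpg > calibration.txt
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1

# The first run performs calibration, writes the table named by
# int8-calib-file, then builds and serializes the INT8 engine
deepstream-app -c deepstream_app_config_int8.txt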

Did you use the OPENCV=1 macro when compiling the nvdsinfer_custom_impl_Yolo library?

Yes, I did.
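
(For reference, the compile step from the repo docs looks roughly like this on JetPack 5.1 with CUDA 11.4; a sketch, assuming the Makefile's CUDA_VER and OPENCV variables.)

# OPENCV=1 builds the library with the OpenCV-based INT8 calibrator
CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo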

Was your model exported according to the following README? I have tried with this model on Orin with DeepStream 6.2, and it works well.
Could you attach your model?

best.onnx (11.5 MB)
best.pt (5.9 MB)
Here are my models (YOLOv8n).
Yes, I followed the README, but best.pt was trained on an RTX 3090 under Windows, and I then failed to export best.pt on the Orin Developer Kit because of `ValueError: Unsupported ONNX opset version: 16`. I installed the CPU version of torch (1.15) and exported best.onnx successfully.
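
(If the export script supports choosing the opset, pinning a lower one is another way around that error on older torch builds. A sketch; the --opset and --simplify flags are assumptions about utils/export_yoloV8.py, so check `python3 utils/export_yoloV8.py -h` for the actual option names.)

# Hypothetical: export with a lower ONNX opset that the older torch
# build on the Orin supports
python3 utils/export_yoloV8.py -w best.pt --opset 12 --simplify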

Could you change `int8-calib-file=calibration.txt` to `int8-calib-file=calib.table`?

I have tried it, but it is the same error.
BTW, what does calib.table mean? I changed the line to `int8-calib-file=calib.table`, then named the txt file calibration.txt (file not found) and calib.table (the same error); both failed.
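
(For context, a sketch of how the two files are meant to fit together per docs/INT8Calibration.md; the names below come from that doc, not from a verified setup. calibration.txt is the input image list, while calib.table is the output table TensorRT writes during calibration.)

# config_infer_primary_yoloV8_int8.txt: int8-calib-file names the
# calibration TABLE to be generated and reused, not the image list
network-mode=1
int8-calib-file=calib.table

The image list itself is passed separately through the INT8_CALIB_IMG_PATH environment variable, as in the earlier sketch.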

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

calib.table is the calibration file generated when you run the deepstream-app command. No matter what name you assign to this file, you need to make sure the file does not already exist in the current directory before you run calibration; delete any stale copy first.
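
A minimal sketch of that advice, using the file names from the config above:

# Delete any stale engine and calibration table so calibration runs fresh
rm -f model_b1_gpu0_int8.engine calib.table
deepstream-app -c deepstream_app_config_int8.txt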
