YOLOX results are wrong in INT8 mode

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.0 and 5.1
• JetPack Version: 4.6
• TensorRT Version: 8.0
I tried to run YOLOX in INT8 mode and the results are wrong, but FP16 works fine. Other models give correct results in INT8 mode. I don't know how to debug this issue; can you give some suggestions?

The GitHub links are:
Inference:
GitHub - egbertYeah/yolox_deepstream
INT8 calibration:
tensorrt_demos/onnx_to_tensorrt.py at master · jkjung-avt/tensorrt_demos · GitHub
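
For context, an INT8 entropy calibrator for the TensorRT Python API follows this general shape. This is a minimal sketch, not the exact code from tensorrt_demos; `preprocess`, the class name, the image directory, the cache file, and the 640×640 input shape are placeholder assumptions. The critical point is that calibration preprocessing must match inference preprocessing exactly:

```python
import os

import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates the CUDA context
import pycuda.driver as cuda
import tensorrt as trt


def preprocess(path, shape):
    """Placeholder: must match the YOLOX inference preprocessing.

    Recent YOLOX versions take raw 0-255 BGR pixels (no mean/std
    normalization) and letterbox-resize with gray padding; older
    versions normalized. Check which your ONNX export expects --
    a mismatch here is a common cause of broken INT8 results.
    Shown simplified as a plain resize.
    """
    import cv2
    _, h, w = shape
    img = cv2.resize(cv2.imread(path), (w, h)).astype(np.float32)
    return img.transpose(2, 0, 1)  # HWC -> CHW


class YoloxCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, image_dir, cache_file,
                 batch_size=1, input_shape=(3, 640, 640)):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.input_shape = input_shape
        self.images = sorted(
            os.path.join(image_dir, f) for f in os.listdir(image_dir))
        self.index = 0
        # One float32 device buffer, reused for every calibration batch.
        self.device_input = cuda.mem_alloc(
            batch_size * int(np.prod(input_shape)) * 4)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.images):
            return None  # no more data -> calibration finishes
        batch = np.stack([
            preprocess(p, self.input_shape)
            for p in self.images[self.index:self.index + self.batch_size]])
        self.index += self.batch_size
        cuda.memcpy_htod(self.device_input,
                         np.ascontiguousarray(batch, dtype=np.float32))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```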

The steps:
1. Use onnx_to_tensorrt.py to create the calibration bin file and the TensorRT engine file.
2. Change the DeepStream config: int8-calib-file, model-engine-file, and network-mode (see the snippet after this list).
3. Run deepstream-app.
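
For reference, the relevant nvinfer properties for step 2 look like this (file names here are placeholders):

```
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
int8-calib-file=calib_yolox.bin
model-engine-file=yolox_s_int8.engine
```

Note that when model-engine-file points to an existing engine that nvinfer can deserialize as-is, that engine is used directly; the table in int8-calib-file is only consulted when nvinfer has to (re)build the engine itself from the model file.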

Did you generate the INT8 calibration table from typical inference images? And how many images did you use to generate it?

Yes, I created the calibration table; I used 1000 images.

What do you mean by "wrong"? Does it not work at all, or is the accuracy just not good?
You could refer to TensorRT/INT8 Accuracy - eLinux.org to narrow down the INT8 accuracy issue.
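
One technique from that page is to bisect the network: keep most layers in INT8 but force suspect layers (e.g., the detection head) back to FP32 and check whether the results recover. A rough sketch against the TensorRT 8.0 Python API; `build_mixed_engine` and `fp32_layer_names` are hypothetical names, and you would supply the calibrator and the layer names to pin:

```python
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.INFO)


def build_mixed_engine(onnx_path, calibrator, fp32_layer_names):
    """Build an INT8 engine, but pin the named layers to FP32."""
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30
    config.set_flag(trt.BuilderFlag.INT8)
    # STRICT_TYPES makes the builder honor the per-layer precisions below.
    config.set_flag(trt.BuilderFlag.STRICT_TYPES)
    config.int8_calibrator = calibrator

    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if layer.name in fp32_layer_names:
            layer.precision = trt.float32
            layer.set_output_type(0, trt.float32)

    return builder.build_engine(network, config)
```

Starting from the output layers and moving backwards usually localizes the layer(s) where INT8 quantization breaks the results.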
