Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version: 6.0 and 5.1
• JetPack Version: 4.6
• TensorRT Version: 8.0
I am trying to run YOLOX in INT8 mode, but the results are wrong; FP16 works fine. Other models produce correct results in INT8 mode. I don't know how to debug this issue. Can you give me some suggestions?
These are the GitHub repos I used:
• egbertYeah/yolox_deepstream for inference
• jkjung-avt/tensorrt_demos (onnx_to_tensorrt.py) for the INT8 calibrator
My steps were:
1. Use onnx_to_tensorrt.py to create the calibration .bin file and the TensorRT engine file (see the calibrator sketch after this list).
2. Update the DeepStream config: set int8-calib-file, model-engine-file, and network-mode (config snippet below).
3. Run deepstream-app (command below).
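
For reference, my calibration step is essentially the following sketch (the class name, paths, and the preprocessing in `_preprocess` are simplified placeholders, not the exact code from jkjung-avt's script):

```python
import os

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates the CUDA context)
import pycuda.driver as cuda
import tensorrt as trt
import cv2


class YoloxEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed calibration images to TensorRT and caches the scales."""

    def __init__(self, image_dir, cache_file, batch_size=1, input_shape=(3, 640, 640)):
        super().__init__()
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.input_shape = input_shape
        self.files = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
        self.index = 0
        # Device buffer for one batch of float32 CHW input.
        self.device_input = cuda.mem_alloc(
            int(batch_size * np.prod(input_shape) * np.float32().nbytes))

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.files):
            return None  # no more data: calibration ends
        batch = np.stack([self._preprocess(f) for f in
                          self.files[self.index:self.index + self.batch_size]])
        self.index += self.batch_size
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def _preprocess(self, path):
        # Placeholder: this must match the preprocessing the model sees at
        # inference time, or the INT8 scales are computed on the wrong ranges.
        c, h, w = self.input_shape
        img = cv2.resize(cv2.imread(path), (w, h)).astype(np.float32)
        return img.transpose(2, 0, 1)  # HWC -> CHW

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```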
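These are the keys I changed in the gst-nvinfer [property] section (file names are placeholders for my actual paths; network-mode=1 selects INT8):

```ini
[property]
model-engine-file=yolox_s_int8.engine
int8-calib-file=calib_yolox.bin
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
```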
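Then I launch it the usual way (the config file name is a placeholder for mine):

```sh
deepstream-app -c deepstream_app_config.txt
```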