Hi there. My setup is the following:
Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
CUDA 10.2
I am trying to calibrate my own YOLOv3 model and generate an INT8 engine. I followed the method described here step by step: How can I generating the correct int8 calibration table with parsing YOLO model? · Issue #747 · NVIDIA/TensorRT · GitHub
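As far as I understand, the method boils down to implementing an IInt8EntropyCalibrator2 that feeds preprocessed calibration images to the TensorRT builder and writes the resulting scales out to a cache file. The linked issue does this in C++ inside the DeepStream YOLO sample; below is the same idea sketched with the TensorRT Python API (the image directory, batch size, input resolution, and preprocessing are placeholders from my setup, not the exact code I ran):

```python
import os
import numpy as np
import cv2
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context
import tensorrt as trt

class YoloEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Streams preprocessed images to TensorRT during INT8 calibration."""

    def __init__(self, image_dir, cache_file, batch_size=1, input_shape=(3, 608, 608)):
        super().__init__()
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.input_shape = input_shape
        self.files = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
        self.index = 0
        # Device buffer large enough for one full batch of float32 inputs.
        self.device_input = cuda.mem_alloc(
            batch_size * int(np.prod(input_shape)) * np.dtype(np.float32).itemsize)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.files):
            return None  # tells TensorRT the calibration set is exhausted
        batch = np.stack([self._load(f) for f in
                          self.files[self.index:self.index + self.batch_size]])
        self.index += self.batch_size
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def _load(self, path):
        # Assumption: this must match the inference-time preprocessing exactly --
        # here RGB, resized to the network input, scaled to [0, 1], CHW.
        c, h, w = self.input_shape
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (w, h)).astype(np.float32) / 255.0
        return img.transpose(2, 0, 1)

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

The calibrator is attached to the builder before building the engine (in TensorRT 7, config.set_flag(trt.BuilderFlag.INT8) and config.int8_calibrator on the builder config), and the cache it writes is what then gets passed to DeepStream.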
I was able to compile everything, generate my own calibration cache and INT8 engine file, and run inference. However, the generated engine produces no detections at all. When I replaced my calibration cache with the provided example “yolov3-calibration.table.trt7.0”, the resulting INT8 engine detected objects correctly.
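For reference, the calibration table is wired into the nvinfer config roughly like this (the values below are from the stock objectDetector_Yolo sample config; swapping the int8-calib-file line between my own cache and the shipped table is the only difference between my failing and working runs):

```
[property]
net-scale-factor=0.0039215697906911373
# 0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3.cfg
model-file=yolov3.weights
labelfile-path=labels.txt
# my own cache here -> no detections; the shipped table -> works
int8-calib-file=yolov3-calibration.table.trt7.0
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
```

One thing I want to double-check is whether the preprocessing used while calibrating (color order, scaling, resolution) has to match net-scale-factor and model-color-format here; a mismatch would explain a cache that builds without errors but yields no detections.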
So I’m wondering if anyone can tell me how “yolov3-calibration.table.trt7.0” was generated in the first place, or what might have gone wrong in my process.
Let me know if you need any other details. Any advice is appreciated!
Thanks