I have recently been getting into TensorRT by optimizing YOLOv3-608. I want to use INT8 precision, but I can't really understand how to create a calibrator and use it. Is there a sample for this in TRT 6?
Hi,
Please refer to the links below:
YOLOv3 to ONNX sample:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#import_onnx_python
INT8 calibration sample in C++:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#enable_int8_c
https://github.com/NVIDIA/TensorRT/tree/release/6.0/samples/opensource/sampleINT8API
INT8 calibration sample in Python:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#enable_int8_python
Thanks
If you want to see how to create a calibrator, you can refer to https://github.com/lewes6369/TensorRT-Yolov3. You can also use https://github.com/zerollzeng/tensorrt-zoo (openpose and yolov3 with tiny-tensorrt), which has YOLOv3 INT8 support.
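To give an idea of what a calibrator looks like, here is a minimal sketch of an entropy calibrator using the TensorRT 6 Python API. It assumes your calibration images are already preprocessed into float32 NumPy arrays of shape (N, 3, 608, 608); the batch iterable, batch size, and cache file name are placeholders for your own data pipeline, not part of any official sample.

```python
# Sketch of an INT8 entropy calibrator for TensorRT 6 (Python API).
# The calibration data pipeline (batches of preprocessed images) is
# assumed to exist; only the calibrator structure is shown here.
import os

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import tensorrt as trt


class YoloEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, batch_size, cache_file="calibration.cache"):
        # batches: iterable of np.float32 arrays of shape (batch_size, 3, 608, 608)
        super().__init__()
        self.batches = iter(batches)
        self.batch_size = batch_size
        self.cache_file = cache_file
        # One device buffer large enough for a full batch of 3x608x608 floats.
        self.device_input = cuda.mem_alloc(
            batch_size * 3 * 608 * 608 * np.float32().nbytes
        )

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None  # returning None tells TensorRT calibration is done
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reusing a cache skips the (slow) calibration pass on later builds.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)


# Hooking it into engine building with the TRT 6 builder API:
#   builder.max_batch_size = batch_size
#   builder.int8_mode = True
#   builder.int8_calibrator = YoloEntropyCalibrator(batches, batch_size)
```

A few hundred representative images are usually enough for calibration; the cache file written on the first build lets you rebuild the engine later without rerunning calibration.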