How do I create a calibration cache for INT8 precision for use with trtexec?

I want to convert my ONNX model to a TensorRT engine with INT8 precision using trtexec, but how do I create the calibration cache that trtexec needs?
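For context, the command I plan to run looks roughly like this (the file names are just placeholders): `trtexec --onnx=model.onnx --int8 --calib=calibration.cache --saveEngine=model_int8.engine`. What I am missing is how to produce the calibration.cache file in the first place.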

TensorRT Version: 7.2
GPU Type: A6000
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.5
PyTorch Version (if applicable): none
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:21.05-tf2-py3

Hi, please refer to the link below for performing inference in INT8:
https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleINT8/README.md
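
If it helps, here is a minimal sketch of how a calibration cache could be generated with the TensorRT Python API (the linked sample shows the equivalent C++ flow). It assumes pycuda and numpy are available; the file names, input shape, and random data are placeholders that you would replace with your own preprocessed calibration samples:

```python
# Sketch: generate calibration.cache with the TensorRT 7.x Python API.
# Paths, input shape, and the batch data are assumptions -- adapt to your model.
import numpy as np
import pycuda.autoinit  # noqa: F401 -- initializes a CUDA context for pycuda
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)


class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed batches to TensorRT and writes the calibration cache."""

    def __init__(self, batches, cache_file="calibration.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.batches = iter(batches)          # list of np.float32 arrays, shape (N, C, H, W)
        self.cache_file = cache_file
        self.batch_size = batches[0].shape[0]
        self.device_input = cuda.mem_alloc(batches[0].nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None                       # signals end of calibration data
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None                       # no cache yet -> calibrate from data

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)                    # this is the file trtexec can load


def build_int8_engine(onnx_path, calibrator):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30       # 1 GB workspace
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator       # cache is written during this build
    return builder.build_engine(network, config)


if __name__ == "__main__":
    # Placeholder data -- replace with real preprocessed samples from your dataset.
    data = [np.random.rand(8, 3, 224, 224).astype(np.float32) for _ in range(10)]
    build_int8_engine("model.onnx", EntropyCalibrator(data))
```

The calibration cache written during this build can then be reused by trtexec through its --calib option when building the INT8 engine, as in the command you posted.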

Thanks!