Calibration file generation from TAO Toolkit

Hi All,


Can you please explain how to choose the number of batches and the batch size when generating the calibration file for training a custom model? I have 8500 images.

Thanks in Advance


Please refer to the following.

Thank you.

Can you please give me a hint on how to choose them?

For more information, please refer to the documents listed above.

# The following steps illustrate how to create an INT8 calibrator object using the TensorRT Python API.

# Import TensorRT:
import tensorrt as trt

# Similar to test/validation datasets, use a set of input files as a calibration dataset. Make sure that the calibration files are representative of the overall inference data.
# For TensorRT to use the calibration files, you must create a batch-stream object. The batch-stream object is used to configure the calibrator.
# Note: ImageBatchStream and EntropyCalibrator are user-defined helper classes, not TensorRT built-ins.
batchstream = ImageBatchStream(NUM_IMAGES_PER_BATCH, calibration_files)

# Create an Int8_calibrator object with the input node names and the batch stream:
Int8_calibrator = EntropyCalibrator(["input_node_name"], batchstream)

# Set INT8 mode and the INT8 calibrator on the builder config:
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = Int8_calibrator
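Since `ImageBatchStream` is not part of the TensorRT package but a user-written helper, here is a minimal sketch of the batching logic such a class needs to provide. It works over file paths only and has no TensorRT dependency; the class and method names are illustrative, not an official API.

```python
class ImageBatchStream:
    """Splits a list of calibration files into fixed-size batches
    that a calibrator can consume one at a time."""

    def __init__(self, batch_size, calibration_files):
        self.batch_size = batch_size
        self.files = list(calibration_files)
        # Number of full batches available from the file list
        # (partial trailing batches are dropped).
        self.max_batches = len(self.files) // batch_size
        self.batch = 0

    def reset(self):
        self.batch = 0

    def next_batch(self):
        # Return the next batch of file paths, or an empty list when done.
        if self.batch >= self.max_batches:
            return []
        start = self.batch * self.batch_size
        self.batch += 1
        return self.files[start:start + self.batch_size]


stream = ImageBatchStream(4, ["img%d.jpg" % i for i in range(10)])
print(stream.max_batches)   # 10 images / 4 per batch -> 2 full batches
print(stream.next_batch())  # ['img0.jpg', 'img1.jpg', 'img2.jpg', 'img3.jpg']
```

In a real calibrator, `next_batch` would also load and preprocess the images into a contiguous buffer before handing them to TensorRT.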

A portion of the TAO detectnet_v2 notebook is shown below.

A. Int8 Optimization
The DetectNet_v2 model supports INT8 inference mode in TRT. In order to use INT8 mode, we must calibrate the model to run 8-bit inference. This involves two steps:

1. Generate a calibration tensorfile from the training data using tlt-int8-tensorfile.
2. Use tlt-export to generate the INT8 calibration table.

Note: For this example, we generate a calibration tensorfile containing 10 batches of training data. Ideally, **it is best to use at least 10-20% of the training data to calibrate the model**. The more data provided during calibration, the closer INT8 inference results are to FP32 results.

!tlt-int8-tensorfile detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                                  -m 40 \
                                  -o $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor
!rm -rf $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt
!rm -rf $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin
!tlt-export detectnet_v2 \
            -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
            -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
            -k $KEY  \
            --cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor \
            --data_type int8 \
            --batches 20 \
            --batch_size 4 \
            --max_batch_size 4 \
            --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.int8 \
            --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin

In my case I have 8500 images, so I put batches as 2125 and batch size as 4 (2125 * 4 = 8500). Is this approach correct? Also, I used 10% of the training data when generating the calibration file, but when running the custom trained model and calibration file in deepstream-test3, the GPU is full at the 9th stream input.


We are moving this post to the TAO Toolkit forum to get better help.

Thank you.

Thanks for the support. How can I contact the TAO team?

Yes, you can set batches and batch_size that way: 2125 * 4 = 8500.
You can also use just 10% of the images to generate the file: batches=212, batch_size=4.
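The arithmetic above can be checked with a small helper. The function name and the `fraction` parameter are illustrative, not part of the TAO CLI; it simply rounds down so that every calibration batch is completely filled (batches * batch_size <= number of images used).

```python
def calibration_batches(num_images, batch_size, fraction=1.0):
    """Number of full calibration batches covering `fraction` of the dataset."""
    return int(num_images * fraction) // batch_size

print(calibration_batches(8500, 4))        # all 8500 images -> 2125 batches
print(calibration_batches(8500, 4, 0.10))  # 10% (850 images) -> 212 batches
```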

Then get the calibration.bin file and int8 tensorrt engine.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.