How to generate INT8 calibration cache from TAO (4.0) export

I’m trying to use TAO to train a QAT model, export it to INT8, and finally convert it to a TensorRT engine. The export section of this document, RetinaNet - NVIDIA Docs, mentions:

The export tool can generate INT8 calibration cache by ingesting training data using either of these options:

  • Option 1: Using the training data loader to load the training images for INT8 calibration. This option is now the recommended approach to support multiple image directories by leveraging the training dataset loader. This also ensures two important aspects of data during calibration:
    • Data pre-processing in the INT8 calibration step is the same as in the training process.
    • The data batches are sampled randomly across the entire training dataset, thereby improving the accuracy of the INT8 model.
  • Option 2: Pointing the tool to a directory of images that you want to use to calibrate the model. For this option, make sure to create a sub-sampled directory of random images that best represent your training dataset.

My question is: how do I use Option 1 (the training data loader), since it is the recommended approach?

I couldn’t find any settings for it in the export command…
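For comparison, Option 2 maps directly onto documented export flags. This is only a sketch: the calibration image directory, batch counts, and cache path are placeholders, and exact flag availability can differ between TAO versions.

!tao retinanet export -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/retinanet_resnet18_epoch_$EPOCH.tlt \
                      -o $USER_EXPERIMENT_DIR/export/retinanet_resnet18_epoch_$EPOCH.etlt \
                      -k $KEY \
                      -e $SPECS_DIR/retinanet_retrain_resnet18_kitti.txt \
                      --data_type int8 \
                      --batch_size 8 \
                      --batches 10 \
                      --cal_image_dir $USER_EXPERIMENT_DIR/data/calibration_images \
                      --cal_cache_file $USER_EXPERIMENT_DIR/export/cal.bin

But I don’t see a similar flag that switches calibration to the training data loader.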

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Can you download the notebook from the TAO Toolkit Quick Start Guide - NVIDIA Docs and refer to it?

!tao retinanet export -m $USER_EXPERIMENT_DIR/experiment_dir_retrain_qat/weights/retinanet_resnet18_epoch_$EPOCH.tlt  \
                      -o $USER_EXPERIMENT_DIR/experiment_dir_retrain_qat/weights/retinanet_resnet18_epoch_$EPOCH.etlt \
                      -k $KEY \
                      -e $SPECS_DIR/retinanet_retrain_resnet18_kitti_qat.txt \
                      --cal_json_file $USER_EXPERIMENT_DIR/export_qat/cal.json \
                      --gen_ds_config
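Worth noting: for a QAT-trained model the INT8 tensor scales are learned during training, so this export step needs no calibration images or data loader; --cal_json_file writes those learned scales out to a JSON file. The engine is then built from the .etlt plus that JSON. In TAO 4.0 this is handled by the gen_trt_engine task from TAO Deploy; the following is a sketch assuming the same notebook variables, with the engine path and batch size as placeholders:

!tao retinanet gen_trt_engine -m $USER_EXPERIMENT_DIR/experiment_dir_retrain_qat/weights/retinanet_resnet18_epoch_$EPOCH.etlt \
                              -k $KEY \
                              -e $SPECS_DIR/retinanet_retrain_resnet18_kitti_qat.txt \
                              --data_type int8 \
                              --batch_size 8 \
                              --cal_json_file $USER_EXPERIMENT_DIR/export_qat/cal.json \
                              --engine_file $USER_EXPERIMENT_DIR/export_qat/trt.engine.int8

In other words, with the QAT path the Option 1 / Option 2 choice does not arise: calibration scales come from the QAT graph rather than from sampled training data.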
