Please provide the following information when requesting support.
• Hardware: 3090
• Network Type: Detectnet_v2
• TLT Version: 3.22.05
In the TAO object detection (detectnet_v2) documentation I noticed that there are 3 ways to convert a model to INT8. I’ve been using option 1 (generating a calibration tensorfile), but it will be deprecated. The documentation states for option 3:
Option 3: Using the training data loader directly to load the training images for INT8 calibration. This option is now the recommended approach as it helps to generate multiple random samples. This also ensures two important aspects of the data during calibration:
• Data pre-processing in the INT8 calibration step is the same as in the training process.
• The data batches are sampled randomly across the entire training dataset, thereby improving the accuracy of the INT8 model.
Calibration occurs as a one-step process with the data batches being generated on the fly.
Can you give me any direction on how to use option 3?
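For reference, this is roughly what I guessed the option 3 invocation might look like, based on the export section of the docs. The model path, key, spec file, and output path are placeholders for my own files, and I'm not sure whether simply pointing `-e` at the training spec (with its dataset_config) and omitting the calibration tensorfile is enough to make export fall back to the training data loader:

```shell
# Guess at an option-3 style INT8 export (paths and $KEY are placeholders).
# Assumption: with no calibration tensorfile supplied, export samples
# calibration batches from the training data loader defined in the spec.
tao detectnet_v2 export \
    -m /workspace/experiments/weights/resnet18_detector.tlt \
    -k $KEY \
    -e /workspace/specs/detectnet_v2_train_resnet18.txt \
    --data_type int8 \
    --batch_size 8 \
    --batches 10 \
    --cal_cache_file /workspace/export/calibration.bin \
    -o /workspace/export/resnet18_detector.etlt
```

Is that the right idea, or is there an extra flag or spec change needed to enable the data-loader-based calibration?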