Please provide the following information when requesting support.
• Hardware (RTX3090)
• Network Type (Classification)
• TLT Version (latest)
The TAO image classification documentation states the following in its notes section: "When exporting a model trained with Quantization Aware Training (QAT) enabled, the tensor scale factors to calibrate the activations are peeled out of the model and serialized to a TensorRT-readable cache file defined by the […]"
However, there is nothing related to QAT in the classification spec files, and adding `enable_qat: true` gives an error. Is it possible to use QAT when (re)training an image classifier?
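For reference, this is roughly what I tried — the placement inside `train_config` is my own guess, modeled on how `enable_qat` is used in the detection network specs (e.g. DetectNet_v2), since the classification spec documentation does not mention the field at all:

```
# Classification training spec (excerpt) — enable_qat placement is speculative
train_config {
  train_dataset_path: "/workspace/tao-experiments/data/train"
  val_dataset_path: "/workspace/tao-experiments/data/val"
  batch_size_per_gpu: 64
  n_epochs: 80
  enable_qat: true   # <-- rejected by the parser with an error
}
```

Adding this line is what triggers the error mentioned above, which makes me suspect the field simply isn't defined in the classification spec schema.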