TLT INT8 QAT

Hello,

I would like to try the Quantization Aware Training (QAT) feature provided by the Transfer Learning Toolkit (TLT), and I have a few questions.

When I try to start training a Faster R-CNN model with TLT 2.0, I get the following error:

google.protobuf.text_format.ParseError: 51:1 : Message type "TrainingConfig" has no field named "enable_qat".
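
For context, this is roughly how I set the flag in the training_config section of my spec file (around line 51, which the error points to). The enable_qat line follows the spec format shown in the QAT documentation for networks that do support it, such as DetectNet_v2; the rest of the section is omitted here, so this is only a sketch:

training_config {
  # ... other training_config fields from my spec omitted ...

  # QAT flag as documented for networks that support QAT in TLT 2.0
  # (e.g. DetectNet_v2); this is the field that the Faster R-CNN
  # TrainingConfig parser rejects.
  enable_qat: true
}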

Could you please confirm that QAT INT8 is not available for Faster R-CNN?

Thanks for your help.