Retrain DetectNet_v2 with QAT=True

Please provide the following information when requesting support.

• Network Type (Detectnet_v2)
• TLT Version (TAO v5.5.0)

I initially trained DetectNet_v2 with QAT=True. I then pruned the model and am now trying to retrain it with QAT=True again, but the retraining fails whenever QAT is enabled.

Is there a specific reason why DetectNet_v2 cannot be trained multiple times with QAT enabled?

I am attaching a reference to the previous discussion below. I would like to know whether any changes have since been implemented to support QAT for DetectNet_v2 during retraining.

Please suggest. Thanks in advance

[quote=“venkatesha2411, post:1, topic:328209”]

Can you double-check? Please:

  • Set the load_graph flag to true, so that the newly pruned model structure is imported.

Refer to Improving INT8 Accuracy Using Quantization Aware Training and the NVIDIA TAO Toolkit | NVIDIA Technical Blog.
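For reference, the two fields involved can be sketched in a DetectNet_v2 retrain spec as below. This is a minimal excerpt, not a complete spec: the model file path is a placeholder, and all other required sections (dataset, augmentation, etc.) are omitted.

```
# Sketch of the relevant DetectNet_v2 retrain spec fields
model_config {
  # Import the pruned model structure rather than rebuilding the template
  load_graph: true
  # Placeholder path to the pruned model from the prune step
  pretrained_model_file: "/workspace/results/pruned/model_pruned.hdf5"
}
training_config {
  # Re-enable quantization-aware training for the retrain phase
  enable_qat: true
}
```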

Hi!
Thanks for the inputs,

  1. We verified with load_graph = true set; the steps we followed are summarized below:

We initially trained the model with QAT = True → pruned the trained model → retrained the pruned model with QAT enabled.
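The three steps above can be sketched with the TAO launcher CLI. This is only a sketch under assumptions: spec file names, result paths, the pruning threshold, and `$KEY` are placeholders, and the exact subcommand form may differ across TAO versions.

```shell
# 1. Initial training with QAT enabled in train_spec.txt (enable_qat: true)
tao model detectnet_v2 train -e train_spec.txt -r results/train -k $KEY

# 2. Prune the QAT-trained model (threshold value is illustrative)
tao model detectnet_v2 prune -m results/train/weights/model.hdf5 \
    -o results/pruned/model_pruned.hdf5 -pth 0.2 -k $KEY

# 3. Retrain with load_graph: true and enable_qat: true in retrain_spec.txt
tao model detectnet_v2 train -e retrain_spec.txt -r results/retrain -k $KEY
```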

But this still fails during execution. Could you please confirm? Is it a bug, or is this restricted for the DetectNet_v2 model for some reason?

Could you please share the full logs and commands? Thanks a lot.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.