TLT training and weight freeze

Good Afternoon,

I am playing a bit with the examples provided within the TLT container.
The only thing that is not clear to me is whether it is possible to retrain existing models by modifying only the very last layers of the network. It seems to me that the entire network is retrained, starting from the existing model.
Is that correct? Or am I misunderstanding everything?

The question stems from the fact that I have a very small custom dataset, and a very large network may eventually overfit.


Hang on!!!

I misread the documentation. It is all explained here!!!

and in the example reported below!!
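For anyone landing here later: below is a minimal sketch of the kind of spec-file setting this refers to, assuming a detectnet_v2-style `model_config` with a ResNet backbone. The exact field names, block indices, and available options depend on the model architecture and TLT version, so treat this as illustrative rather than authoritative:

```
model_config {
  arch: "resnet"
  num_layers: 18
  # Freeze the listed backbone blocks so their weights are not updated;
  # only the remaining (later) layers are retrained.
  freeze_blocks: 0
  freeze_blocks: 1
  # Optionally keep batch-norm statistics fixed as well.
  freeze_bn: true
}
```

Freezing the early blocks like this is exactly the usual remedy for the small-dataset overfitting concern raised above.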