Issue with fine-tuning trained AutoML TLT models

Hello. According to the doc BYOM Image Classification - NVIDIA Docs, the format of a custom model is a .tltb file.

However, the format of models trained by AutoML is .tlt.

Is it possible to set the value of model_config.byom_model to a .tlt file in order to fine-tune the trained AutoML .tlt models?

You can use the trained AutoML .tlt model as a pretrained model. It is a .tlt model which can be fine-tuned in the corresponding network in TAO.
It is not related to BYOM.

Thank you @Morganh. How can I fine-tune my trained AutoML .tlt model via the TAO Toolkit API?

For example, for YOLOv4, you can set it in pretrain_model_path: xxx.tlt
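In the YOLOv4 training spec that would be a fragment like the one below (placing the field inside training_config is an assumption based on the TAO YOLOv4 spec layout; xxx.tlt is a placeholder for your AutoML output — verify against your generated spec):

```
training_config {
  # Point at the .tlt produced by the AutoML train job (placeholder path)
  pretrain_model_path: "xxx.tlt"
}
```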

Thank you @Morganh. For multi-class classification, do I have to set the trained AutoML model in train_config.pretrained_model_path: xxx.tlt ?

Also, will I still need to set the value of ptm_information and PATCH it via requests, as in the picture below, if I fine-tune the trained AutoML model?

Please refer to

After saving the best model obtained from AutoML, you can plug the model and spec file in the end-to-end notebook and then prune and optimize the model for inference.

[Figure 3. End-to-end workflow from AutoML training to model optimization: training with AutoML → model evaluation → pruning → re-training → model evaluation → export → optimizing the model for inference]

To plug the model into the new notebook, copy the train job ID from the AutoML notebook. The AutoML train job ID is printed when you run the training job.

train_job_id = subprocess.getoutput(f"tao-client {model_name} model-train --id " + model_id)
print(train_job_id)

Does model_id belong to the trained model obtained from AutoML?

Yes, it is.

OK. How can I get the model_id of the trained AutoML model in a new notebook via the tao-client command?

Refer to , for example,

The spec file for the best-performing experiment is stored in the following directory:


The best model for this experiment was ID 9 with a mAP of 0.627. This is stored in the best_model/recommendation_9.kitti file.

You can find the model_id using “ls”, etc.
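A small sketch of what that lookup could look like in a notebook cell. The workspace layout (`<workspace>/<user_id>/models/<model_id>/`) is an assumption about the TAO API shared volume, and `find_model_ids` is a hypothetical helper — adjust the path pattern to your deployment:

```python
import glob
import os

def find_model_ids(workspace):
    """List the model_id directory names under <workspace>/<user_id>/models/.

    The directory layout is an assumption about the TAO API shared volume;
    each subdirectory name under models/ is taken to be a model_id (a UUID).
    """
    pattern = os.path.join(workspace, "*", "models", "*")
    return sorted(os.path.basename(p) for p in glob.glob(pattern) if os.path.isdir(p))
```

For example, `find_model_ids(os.path.expanduser("~/shared/users"))` would print the candidate IDs; the AutoML train job output (including `best_model/`) then lives under the matching model_id directory.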

I see.
Excuse me, could I modify the parameter named freeze_blocks during fine-tuning if the architecture of my trained AutoML model is ResNet?

Which network did you run for the end2end notebook?

I trained the NVIDIA pretrained ResNet-50 network in the AutoML notebook first.

Now I would like to fine-tune the trained AutoML network with new data in the end2end notebook.

Which end2end notebook will you run?

I ran the end2end notebook named multiclass_classification.ipynb.

Yes, you can apply changes to the specs you want to modify.
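Through the TAO API, such a change is just an edit to the specs dict before you post it back for training. The helper below is a sketch; the model_config / freeze_blocks key names are assumptions — confirm them against the schema returned by the specs endpoint:

```python
def set_freeze_blocks(specs, blocks):
    """Set freeze_blocks in a train-specs dict before posting it back.

    Key names ("model_config", "freeze_blocks") are assumptions taken from
    the classification spec layout; verify against your specs schema.
    """
    model_config = specs.setdefault("model_config", {})
    model_config["freeze_blocks"] = list(blocks)
    return specs
```

For example, `set_freeze_blocks(specs, [0, 1])` would freeze the first two ResNet blocks during fine-tuning while leaving the rest of the specs untouched.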

Thank you @Morganh .

Excuse me. Which API command should I use in the process of fine-tuning the trained model?
Is it endpoint = f"{base_url}/model/{model_id}/specs/train/schema" or endpoint = f"{base_url}/model/{model_id}/specs/retrain/schema" ?

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

You can use the 1st one.
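Putting that together, a minimal sketch of building and calling the first endpoint. The `train_specs_schema_endpoint` helper is hypothetical, and `base_url`, `model_id`, and the auth headers are assumptions to be taken from your TAO API notebook's login cells:

```python
def train_specs_schema_endpoint(base_url, model_id):
    """Build the train-spec schema endpoint (the first of the two options)."""
    return f"{base_url}/model/{model_id}/specs/train/schema"

# Usage sketch (base_url, model_id, and headers come from your TAO API
# notebook session; the exact response shape is deployment-dependent):
#
#   import requests
#   endpoint = train_specs_schema_endpoint(base_url, model_id)
#   specs = requests.get(endpoint, headers=headers).json()
```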

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.