Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) Classification_tf2 and AutoML
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) v5.0.0
• Training spec file (if available, please share here) tao-getting-started_v5.0.0/notebooks/tao_launcher_starter_kit/classification_tf2/tao_voc/specs/spec.yaml AND https://github.com/NVIDIA/tao_front_end_services/blob/main/api/specs_utils/specs/classification_tf2/classification_tf2%20-%20train.csv
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
For the same classification_tf2 EfficientNet_B0 model, there appear to be two different ways to specify the training spec.
- The spec in tao-getting-started_v5.0.0/notebooks/tao_launcher_starter_kit/classification_tf2/tao_voc/specs/spec.yaml. Here there are choices for the LR schedule (cosine, step, etc.), and they live under train.lr_config.
- The spec in the latest GitHub repo: https://github.com/NVIDIA/tao_front_end_services/blob/main/api/specs_utils/specs/classification_tf2/classification_tf2%20-%20train.csv. Here I don't see any choices for the LR schedule, and train.lr_config no longer seems to exist. I also see newly added options under train.optim_config, but no way to tell which options belong to which optimizer (e.g. SGD, Adam, Adadelta).
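For context, the old-style (notebook) layout I am referring to looks roughly like this. This is an illustrative sketch based on the starter-kit spec.yaml; the values are examples only, and field names may differ slightly between versions:

```yaml
# Sketch of the old-style classification_tf2 train section (not authoritative).
train:
  num_epochs: 80
  optim_config:
    optimizer: 'sgd'      # optimizer choice lives here
  lr_config:
    scheduler: 'cosine'   # LR-schedule choice, e.g. 'cosine' or 'step'
    learning_rate: 0.05
    soft_start: 0.05
```

In the newer CSV-based spec, this lr_config block is what I can no longer find.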
Here are my questions:
1. I was able to train a model with both specs, but which spec file should be followed for consistency? I am guessing the newer one, given its recency.
2. For the new spec, there is no documentation on which hyperparameter choices are available. Is there any place where I can look them up?