How do I set the soft_start_annealing_schedule params so that the training process reaches num_epochs?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : GTX-1080ti
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): lprnet/detectnet_v2
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): TLT:3.0, docker_tag:v3.0-py3

Hi,
I want to train lprnet and detectnet_v2 on a custom dataset, and this is my learning rate schedule:

training_config {
  batch_size_per_gpu: 32
  num_epochs: 120
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-6
      max_learning_rate: 5e-4
      soft_start: 0.001
      annealing: 0.7
    }
  }
}
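
For reference, below is a minimal Python sketch of how a soft-start annealing schedule of this kind typically maps training progress to a learning rate, using the values from the config above; the exponential ramp/decay shape is an assumption for illustration, so check the TAO source or documentation for the exact curve your version uses:

def soft_start_annealing_lr(progress, min_lr=5e-6, max_lr=5e-4,
                            soft_start=0.001, annealing=0.7):
    """Learning rate for a given training progress (0.0 .. 1.0).

    The exponential interpolation below is an assumed shape for
    illustration, not the exact TAO implementation.
    """
    if progress < soft_start:
        # Warm-up: ramp from min_lr up to max_lr over the first
        # `soft_start` fraction of training.
        t = progress / soft_start
    elif progress < annealing:
        # Plateau: hold max_lr until the `annealing` point.
        t = 1.0
    else:
        # Annealing: decay from max_lr back down toward min_lr.
        t = (1.0 - progress) / (1.0 - annealing)
    return min_lr * (max_lr / min_lr) ** t

# With num_epochs = 120: warm-up ends after ~0.12 epochs (0.001 * 120)
# and annealing starts around epoch 84 (0.7 * 120).
num_epochs = 120
for epoch in [0, 1, 42, 84, 100, 119]:
    lr = soft_start_annealing_lr((epoch + 0.5) / num_epochs)
    print(f"epoch {epoch:3d}: lr = {lr:.2e}")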

The training process only continues to 61 epochs, but I set num_epochs to 120.
I guess the end of training is really related to the values of soft_start/annealing and the size of the dataset.

I want to know how I can calculate these values so that the training process reaches num_epochs.

See NVIDIA TAO Documentation for more info about soft_start.
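Note that soft_start and annealing are expressed as fractions of total training progress, so with num_epochs: 120 the values above give roughly 0.12 epochs of warm-up (0.001 × 120) and a decay that begins around epoch 84 (0.7 × 120); they shape the learning-rate curve but should not by themselves stop training before num_epochs.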

This issue is handled in the final version of TLT 3.0.
But in the developer preview version of TLT 3.0, this bug is not fixed.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.