Please provide the following information when requesting support.
• Hardware: NVIDIA A5000 x8
• Network Type: LPRNet
• TLT Version (from tlt info --verbose): format_version: 2.0 / toolkit_version: 3.22.05 / docker_tag: v3.21.11-py3
• Training spec file (if available): tutorial_spec.txt (1.2 KB)
I want to retrain LPRNet. In the Python example on GitHub I saw that there is a smaller model.
Is there any way to make the LPRNet model smaller for faster inference?
I tried changing nlayers in the config to 10, but after that the accuracy stays at 0. Is this the correct approach?
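For reference, the only change I made is in the lpr_config section of the spec, roughly like this (a sketch; the fields other than nlayers are what I believe are the tutorial defaults, so please correct me if they should change for a smaller model):

    lpr_config {
      hidden_units: 512      # tutorial default (assumed)
      max_label_length: 8    # tutorial default (assumed)
      arch: "baseline"
      nlayers: 10            # changed from 18 to get a smaller model
    }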
Logs are attached below. This time the accuracy does start increasing, around epoch 55, but it remains very low. The baseline18 model works fine with the same specs: I get 90% accuracy. b10logs.txt (31.1 KB)
With baseline18, the loss also decreases more quickly than with baseline10. For baseline10, could you suggest tuned values for max_learning_rate, batch_size_per_gpu, and the other training parameters? For example (see the sketch after these values):
For example,
batch_size_per_gpu: 16
max_learning_rate: 1e-4
annealing: 0.7
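For context, here is where these values would go in the spec (a sketch assuming the standard tutorial_spec.txt layout; min_learning_rate and soft_start are what I believe are the tutorial defaults, not values I have verified for baseline10):

    training_config {
      batch_size_per_gpu: 16        # proposed value from above
      learning_rate {
        soft_start_annealing_schedule {
          min_learning_rate: 1e-6   # tutorial default (assumed)
          max_learning_rate: 1e-4   # proposed value from above
          soft_start: 0.001         # tutorial default (assumed)
          annealing: 0.7            # proposed value from above
        }
      }
    }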