Retrain smaller LPRnet

Please provide the following information when requesting support.

• Hardware: NVIDIA A5000 x8
• Network Type: LPRnet
• TLT Version ("tlt info --verbose"): format_version: 2.0 / toolkit_version: 3.22.05 / docker_tag: v3.21.11-py3
• Training spec file: attached below
tutorial_spec.txt (1.2 KB)

I want to retrain LPRnet, and I saw that in the Python example on GitHub there is a smaller model.
Is there any way to make the LPR model smaller for faster inference?

I tried changing nlayers in the config to 10, but then the accuracy stays at 0. Is this the correct approach?
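For reference, the only change was in the lpr_config block of the spec. A minimal sketch, assuming the default LPRnet spec layout; the values other than nlayers are just the ones from my tutorial_spec.txt and are only illustrative here:

lpr_config {
  hidden_units: 512
  max_label_length: 8
  arch: "baseline"
  nlayers: 10   # changed from 18
}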

Could you share the training log? Also, could you check with TAO 4.0 or 5.0 as well?

Logs below. It seems that this time the accuracy starts increasing at epoch 55, but it is very low. The baseline18 model works fine; I get 90% accuracy with these specs.
b10logs.txt (31.1 KB)

Could you share the logs when you run baseline18?

I am afraid some parameters need to be fine-tuned for baseline10.

The logs are below.
b18logs.txt (37.3 KB)

The loss decreases more quickly than for baseline10. For baseline10, could you fine-tune parameters such as max_learning_rate, batch_size_per_gpu, and others?
For example,
batch_size_per_gpu: 16
max_learning_rate: 1e-4
annealing: 0.7
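These parameters live under training_config in the spec. A rough sketch, assuming the default LPRnet spec layout; keep your existing values for the other fields, the ones marked below are only examples:

training_config {
  batch_size_per_gpu: 16
  num_epochs: 100                  # keep your existing value
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 1e-6      # example value
      max_learning_rate: 1e-4
      soft_start: 0.001            # example value
      annealing: 0.7
    }
  }
}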

With these values the accuracy started to improve, and it reached 90%. Thanks!

Is there anything else that can be done to reduce the model size further? The baseline10 model is indeed smaller, but it is still larger than the one used in the example.

Could you share the link?

Sorry, my mistake. I was comparing the .tlt file with the exported .etlt version. The .etlt version is indeed smaller.
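For reference, a sketch of the export step that produced the .etlt; the paths and the key are placeholders, and the options are as I recall them from the TAO 3.x lprnet export usage (check tao lprnet export -h for your version):

tao lprnet export -m /workspace/lprnet/weights/lprnet_baseline10_epoch-100.tlt \
                  -k <your_key> \
                  -e /workspace/lprnet/tutorial_spec.txt \
                  -o /workspace/lprnet/export/lprnet_baseline10.etlt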

Thanks for your help.
