Missing the last character in License Plate Recognition

Please provide the following information when requesting support.
• Hardware: GeForce GTX 1050 Ti
• Network Type: LPRNet
• TLT Version: http://nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3
• Training spec file

random_seed: 42
lpr_config {
  hidden_units: 512
  max_label_length: 9
  arch: "baseline"
  nlayers: 18 # set nlayers to 10 to use the baseline10 model
}
training_config {
  batch_size_per_gpu: 32
  num_epochs: 5
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 1e-6
      max_learning_rate: 1e-4
      soft_start: 0.001
      annealing: 0.7
    }
  }
  regularizer {
    type: L2
    weight: 5e-4
  }
}
eval_config {
  validation_period_during_training: 5
  batch_size: 1
}
augmentation_config {
  output_width: 96
  output_height: 48
  output_channel: 3
  max_rotate_degree: 5
  rotate_prob: 0.5
  keep_original_prob: 0.3
}
dataset_config {
  data_sources: {
    label_directory_path: '/workspace/tlt-experiments/licenseplate_dataset_lprnet/train_Lao_VN_v4/train/labels'
    image_directory_path: '/workspace/tlt-experiments/licenseplate_dataset_lprnet/train_Lao_VN_v4/train/images'
  }
  characters_list_file: '/workspace/tlt-experiments/lprnet/specs/lao_vn_lp_characters.txt'
  validation_data_sources: {
    label_directory_path: '/workspace/tlt-experiments/licenseplate_dataset_lprnet/Lao_LPR_v2_Split/valid/labels'
    image_directory_path: '/workspace/tlt-experiments/licenseplate_dataset_lprnet/Lao_LPR_v2_Split/valid/images'
  }
}
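One sanity check worth running on a spec like the one above: every character that appears in the training labels must also appear in characters_list_file, or the network cannot emit it at inference time. A minimal sketch, assuming the TAO convention of one character per line in the list file and one plate string per label file (find_uncovered_chars is a hypothetical helper, not part of the toolkit):

```python
from pathlib import Path

def find_uncovered_chars(labels_dir: str, characters_file: str) -> set:
    """Return the set of label characters missing from characters_list_file."""
    # Assumes one character per line in the characters list file.
    charset = set(Path(characters_file).read_text().split("\n"))
    charset.discard("")
    uncovered = set()
    for label_path in Path(labels_dir).glob("*.txt"):
        # Assumes each label file holds a single plate string.
        for ch in label_path.read_text().strip():
            if ch not in charset:
                uncovered.add(ch)
    return uncovered
```

If this returns a non-empty set for your training or validation labels, those characters can never be predicted.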

• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
Given a license plate image, LPRNet should recognize all the characters in it, for example “43A 123456”.
However, after integrating LPRNet into DeepStream, the result was 43A 12345; the number 6 disappeared.

This did not happen when using the LPRNet standalone.

This only happened occasionally, not on every plate.

The application is carried out in two phases:

  • License Plate Detection → License Plate Recognition.
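Since the failure appears only in the two-phase DeepStream pipeline, one thing to rule out (an assumption on my part, not confirmed here) is a tight detection box clipping the final character before the crop is resized to the 96x48 LPRNet input from the spec above. A minimal sketch of padding the detection box before cropping (expand_box is a hypothetical helper):

```python
def expand_box(box, frame_w, frame_h, pad_ratio=0.1):
    """Pad a detection box (x1, y1, x2, y2) by pad_ratio on each side,
    clamped to the frame bounds, before cropping the plate for LPRNet."""
    x1, y1, x2, y2 = box
    pad_x = int((x2 - x1) * pad_ratio)
    pad_y = int((y2 - y1) * pad_ratio)
    return (max(0, x1 - pad_x), max(0, y1 - pad_y),
            min(frame_w, x2 + pad_x), min(frame_h, y2 + pad_y))
```

If recognition succeeds on padded crops but fails on the raw detector output, the detector, not LPRNet, is cutting off the last character.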

Real Example
The image below is a real example.

  • Model prediction: 37C1954
  • Expected Result: 37C19545
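One possible explanation (an assumption, not confirmed from this thread): LPRNet's output is typically decoded with a greedy CTC-style decoder, which merges consecutive identical class indices unless a blank separates them, so a repeated trailing character can vanish if the blank index is handled differently in the DeepStream parser than in the standalone code. A minimal sketch of that decoding behavior (ctc_greedy_decode is a hypothetical illustration, not the DeepStream parser):

```python
def ctc_greedy_decode(frame_argmax, blank):
    """Collapse per-frame argmax class indices into a label sequence:
    drop blanks, and merge runs of the same index unless a blank splits them."""
    out, prev = [], None
    for idx in frame_argmax:
        if idx != prev and idx != blank:
            out.append(idx)
        prev = idx
    return out
```

Note that [5, 5, 0, 5] with blank 0 decodes to two 5s, while [5, 5, 5] decodes to a single 5, which is exactly the kind of off-by-one-character difference reported here.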

How about running inference against the tlt model? Is it correct?

The prediction results from the tlt model are correct.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

So, you can run your standalone inference code well, right?

May I know whether it is running with GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.