Transfer learning LPRNet to recognise text other than license plates

LPRNet works very well on license plates, and we wanted to check the possibility of using transfer learning to train it to read specific labels. Will it work? If so, could you please provide some basic tips on how to get started? This is the first time I will be training a text recognition model.

Our use case involves detecting a label using detectnet_v2. The labels are detected with 90%+ accuracy and it is working extremely well. The implementation is similar to LPD+LPR, where first the car is detected, then the license plate, and then LPR is run. With exactly the same flow for something other than license plates, would you suggest we go for the transfer learning mentioned above? Or is there any other readily available model or library that you would suggest?

May I know more about the specific labels? Could you share an example?

Hi Morgan,

I have sent a PM with details.

The text is warped, so it is not similar to the current LPR case.
We will add an STN feature for LPRNet in a future release.
It will work for warped text.

Thanks Morgan, is there any workaround for now? Is there any OCR or similar option that we could implement without changing too much?

Further to my previous question, will this work as an SGIE for the recognition part on the ROI? GitHub - NVIDIA-AI-IOT/scene-text-recognition.

Also, how do I enable line crossing (nvdsanalytics) in the LPD/LPR example? It looks like all the code is there, but I can't put my finger on what is missing or how to activate it.
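
For context, my understanding is that the plugin's line-crossing settings look roughly like the stanza below (adapted from the generic nvdsanalytics sample config; the group name, class-id, and coordinates are only placeholders, and I am not sure how it is wired into the LPD/LPR sample):

[property]
enable=1
config-width=1920
config-height=1080
osd-mode=2

[line-crossing-stream-0]
enable=1
# first two points give the crossing direction, last two points are the line endpoints (placeholder values)
line-crossing-Entry=789;672;1084;900;851;773;1203;732
class-id=0
extended=0
mode=loose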

Yes, for your case, I suggest you crop the whole yellow object rather than the ears, and then use those crops to train LPRNet. Use the pretrained LPRNet model as the pretrained model, and also set a higher rotation degree in the training spec file.
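
For example, in the augmentation_config of the training spec you can raise the rotation range; the value below is only illustrative, so pick a range that matches how rotated your labels actually are:

augmentation_config {
    # other augmentation settings unchanged
    max_rotate_degree: 30
    rotate_prob: 0.5
}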

Thanks, Morgan, we will do that. Sorry if my question is too basic: we are planning to train with a training image size of around 224x224, or larger if possible. Will that be suitable for LPRNet?

It is fine. The training images are resized to the output_width/output_height set in the augmentation_config.

Hi Riddhi,
Could you share the error log? Please share the training spec with us as well. Thanks.

Hi Morgan,

The following is the spec file:

random_seed: 42
lpr_config {
  hidden_units: 512
  max_label_length: 8
  arch: "baseline"
  nlayers: 18 # set nlayers to 10 to use the baseline10 model instead
}
training_config {
  batch_size_per_gpu: 32
  num_epochs: 24
  learning_rate {
  soft_start_annealing_schedule {
    min_learning_rate: 1e-6
    max_learning_rate: 1e-5
    soft_start: 0.001
    annealing: 0.5
  }
  }
  regularizer {
    type: L2
    weight: 5e-4
  }
}
eval_config {
  validation_period_during_training: 5
  batch_size: 1
}
augmentation_config {
    output_width: 64
    output_height: 64
    output_channel: 3
    max_rotate_degree: 5
    rotate_prob: 0.5
    gaussian_kernel_size: 5
    gaussian_kernel_size: 7
    gaussian_kernel_size: 15
    blur_prob: 0.5
    reverse_color_prob: 0.5
    keep_original_prob: 0.3
}
dataset_config {
  data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/openalpr/train/label"
    image_directory_path: "/workspace/tao-experiments/data/openalpr/train/image"
  }
  characters_list_file: "/workspace/tao-experiments/lprnet/specs/us_lp_characters.txt"
  validation_data_sources: {
    label_directory_path: "/workspace/tao-experiments/data/openalpr/val/label"
    image_directory_path: "/workspace/tao-experiments/data/openalpr/val/image"
  }
}

(We have done offline augmentation, so we also tried removing this section from the spec file, but it still gives an error.)

Error log:

For multi-GPU, change --gpus based on your machine.
2022-03-16 06:06:24,693 [INFO] root: Registry: ['nvcr.io']
2022-03-16 06:06:25,883 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
2022-03-16 06:06:25,924 [WARNING] tlt.components.docker_handler.docker_handler: 
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 06:06:40,759 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,244 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,244 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 06:06:42,246 [INFO] __main__: Loading pretrained weights. This may take a while...
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,267 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,268 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 06:06:42,269 [INFO] __main__: Loading pretrained weights. This may take a while...
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,280 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,280 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 06:06:42,282 [INFO] __main__: Loading pretrained weights. This may take a while...
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,294 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 06:06:42,294 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 06:06:42,296 [INFO] __main__: Loading pretrained weights. This may take a while...
The shape of this layer does not match original model: lstm
Loading the model as a pruned model.
The shape of this layer does not match original model: lstm
Loading the model as a pruned model.
The shape of this layer does not match original model: lstm
Loading the model as a pruned model.
The shape of this layer does not match original model: lstm
Loading the model as a pruned model.
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
2022-03-16 06:07:28,322 [WARNING] tensorflow: No training configuration found in save file: the model was *not* compiled. Compile it manually.
Initialize optimizer
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
2022-03-16 06:07:28,659 [WARNING] tensorflow: No training configuration found in save file: the model was *not* compiled. Compile it manually.
Initialize optimizer
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
2022-03-16 06:07:28,927 [WARNING] tensorflow: No training configuration found in save file: the model was *not* compiled. Compile it manually.
Initialize optimizer
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
2022-03-16 06:07:29,247 [WARNING] tensorflow: No training configuration found in save file: the model was *not* compiled. Compile it manually.
Initialize optimizer
Model: "lpnet_baseline_18"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
image_input (InputLayer)        [(None, 3, 48, 96)]  0                                            
__________________________________________________________________________________________________
tf_op_layer_Sum (TensorFlowOpLa (None, 1, 48, 96)    0           image_input[0][0]                
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 64, 48, 96)   640         tf_op_layer_Sum[0][0]            
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 64, 48, 96)   256         conv1[0][0]                      
__________________________________________________________________________________________________
re_lu (ReLU)                    (None, 64, 48, 96)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 64, 48, 96)   0           re_lu[0][0]                      
__________________________________________________________________________________________________
res2a_branch2a (Conv2D)         (None, 64, 48, 96)   36928       max_pooling2d[0][0]              
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 64, 48, 96)   256         res2a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_1 (ReLU)                  (None, 64, 48, 96)   0           bn2a_branch2a[0][0]              
__________________________________________________________________________________________________
res2a_branch1 (Conv2D)          (None, 64, 48, 96)   4160        max_pooling2d[0][0]              
__________________________________________________________________________________________________
res2a_branch2b (Conv2D)         (None, 64, 48, 96)   36928       re_lu_1[0][0]                    
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 64, 48, 96)   256         res2a_branch1[0][0]              
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 64, 48, 96)   256         res2a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add (TensorFlowOpLa (None, 64, 48, 96)   0           bn2a_branch1[0][0]               
                                                                 bn2a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_2 (ReLU)                  (None, 64, 48, 96)   0           tf_op_layer_add[0][0]            
__________________________________________________________________________________________________
res2b_branch2a (Conv2D)         (None, 64, 48, 96)   36928       re_lu_2[0][0]                    
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 64, 48, 96)   256         res2b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_3 (ReLU)                  (None, 64, 48, 96)   0           bn2b_branch2a[0][0]              
__________________________________________________________________________________________________
res2b_branch2b (Conv2D)         (None, 64, 48, 96)   36928       re_lu_3[0][0]                    
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 64, 48, 96)   256         res2b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_1 (TensorFlowOp (None, 64, 48, 96)   0           re_lu_2[0][0]                    
                                                                 bn2b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_4 (ReLU)                  (None, 64, 48, 96)   0           tf_op_layer_add_1[0][0]          
__________________________________________________________________________________________________
res3a_branch2a (Conv2D)         (None, 128, 24, 48)  73856       re_lu_4[0][0]                    
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 128, 24, 48)  512         res3a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_5 (ReLU)                  (None, 128, 24, 48)  0           bn3a_branch2a[0][0]              
__________________________________________________________________________________________________
res3a_branch1 (Conv2D)          (None, 128, 24, 48)  8320        re_lu_4[0][0]                    
__________________________________________________________________________________________________
res3a_branch2b (Conv2D)         (None, 128, 24, 48)  147584      re_lu_5[0][0]                    
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 128, 24, 48)  512         res3a_branch1[0][0]              
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 128, 24, 48)  512         res3a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_2 (TensorFlowOp (None, 128, 24, 48)  0           bn3a_branch1[0][0]               
                                                                 bn3a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_6 (ReLU)                  (None, 128, 24, 48)  0           tf_op_layer_add_2[0][0]          
__________________________________________________________________________________________________
res3b_branch2a (Conv2D)         (None, 128, 24, 48)  147584      re_lu_6[0][0]                    
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 128, 24, 48)  512         res3b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_7 (ReLU)                  (None, 128, 24, 48)  0           bn3b_branch2a[0][0]              
__________________________________________________________________________________________________
res3b_branch2b (Conv2D)         (None, 128, 24, 48)  147584      re_lu_7[0][0]                    
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 128, 24, 48)  512         res3b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_3 (TensorFlowOp (None, 128, 24, 48)  0           re_lu_6[0][0]                    
                                                                 bn3b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_8 (ReLU)                  (None, 128, 24, 48)  0           tf_op_layer_add_3[0][0]          
__________________________________________________________________________________________________
res4a_branch2a (Conv2D)         (None, 256, 12, 24)  295168      re_lu_8[0][0]                    
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 256, 12, 24)  1024        res4a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_9 (ReLU)                  (None, 256, 12, 24)  0           bn4a_branch2a[0][0]              
__________________________________________________________________________________________________
res4a_branch1 (Conv2D)          (None, 256, 12, 24)  33024       re_lu_8[0][0]                    
__________________________________________________________________________________________________
res4a_branch2b (Conv2D)         (None, 256, 12, 24)  590080      re_lu_9[0][0]                    
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 256, 12, 24)  1024        res4a_branch1[0][0]              
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 256, 12, 24)  1024        res4a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_4 (TensorFlowOp (None, 256, 12, 24)  0           bn4a_branch1[0][0]               
                                                                 bn4a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_10 (ReLU)                 (None, 256, 12, 24)  0           tf_op_layer_add_4[0][0]          
__________________________________________________________________________________________________
res4b_branch2a (Conv2D)         (None, 256, 12, 24)  590080      re_lu_10[0][0]                   
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 256, 12, 24)  1024        res4b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_11 (ReLU)                 (None, 256, 12, 24)  0           bn4b_branch2a[0][0]              
__________________________________________________________________________________________________
res4b_branch2b (Conv2D)         (None, 256, 12, 24)  590080      re_lu_11[0][0]                   
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 256, 12, 24)  1024        res4b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_5 (TensorFlowOp (None, 256, 12, 24)  0           re_lu_10[0][0]                   
                                                                 bn4b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_12 (ReLU)                 (None, 256, 12, 24)  0           tf_op_layer_add_5[0][0]          
__________________________________________________________________________________________________
res5a_branch2a (Conv2D)         (None, 300, 12, 24)  691500      re_lu_12[0][0]                   
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 300, 12, 24)  1200        res5a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_13 (ReLU)                 (None, 300, 12, 24)  0           bn5a_branch2a[0][0]              
__________________________________________________________________________________________________
res5a_branch1 (Conv2D)          (None, 300, 12, 24)  77100       re_lu_12[0][0]                   
__________________________________________________________________________________________________
res5a_branch2b (Conv2D)         (None, 300, 12, 24)  810300      re_lu_13[0][0]                   
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 300, 12, 24)  1200        res5a_branch1[0][0]              
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 300, 12, 24)  1200        res5a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_6 (TensorFlowOp (None, 300, 12, 24)  0           bn5a_branch1[0][0]               
                                                                 bn5a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_14 (ReLU)                 (None, 300, 12, 24)  0           tf_op_layer_add_6[0][0]          
__________________________________________________________________________________________________
res5b_branch2a (Conv2D)         (None, 300, 12, 24)  810300      re_lu_14[0][0]                   
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 300, 12, 24)  1200        res5b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_15 (ReLU)                 (None, 300, 12, 24)  0           bn5b_branch2a[0][0]              
__________________________________________________________________________________________________
res5b_branch2b (Conv2D)         (None, 300, 12, 24)  810300      re_lu_15[0][0]                   
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 300, 12, 24)  1200        res5b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_7 (TensorFlowOp (None, 300, 12, 24)  0           re_lu_14[0][0]                   
                                                                 bn5b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_16 (ReLU)                 (None, 300, 12, 24)  0           tf_op_layer_add_7[0][0]          
__________________________________________________________________________________________________
permute_feature (Permute)       (None, 24, 12, 300)  0           re_lu_16[0][0]                   
__________________________________________________________________________________________________
flatten_feature (Reshape)       (None, 24, 3600)     0           permute_feature[0][0]            
__________________________________________________________________________________________________
lstm (LSTM)                     (None, 24, 512)      8423424     flatten_feature[0][0]            
__________________________________________________________________________________________________
td_dense (TimeDistributed)      (None, 24, 36)       18468       lstm[0][0]                       
__________________________________________________________________________________________________
softmax (Softmax)               (None, 24, 36)       0           td_dense[0][0]                   
==================================================================================================
Total params: 14,432,480
Trainable params: 14,424,872
Non-trainable params: 7,608
__________________________________________________________________________________________________
2022-03-16 06:07:29,293 [INFO] __main__: Number of images in the training dataset:	  3406
2022-03-16 06:07:29,293 [INFO] __main__: Number of images in the validation dataset:	   306
Traceback (most recent call last):
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py", line 279, in <module>
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py", line 275, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py", line 200, in run_experiment
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 727, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py", line 603, in fit
    steps_name='steps_per_epoch')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py", line 265, in model_iteration
    batch_outs = batch_function(*batch_data)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 991, in train_on_batch
    extract_tensors_from_dataset=True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2471, in _standardize_user_data
    exception_prefix='input')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py", line 572, in standardize_input_data
    str(data_shape))
ValueError: Error when checking input: expected image_input to have shape (3, 48, 96) but got array with shape (3, 64, 64)
Epoch 1/24
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun.real detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[46717,1],1]
  Exit code:    1
--------------------------------------------------------------------------
2022-03-16 06:07:32,366 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

The above is not related to the input resolution of your training images.

Since you are using the pretrained model, please set

output_width: 64
output_height: 48
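
That is, the augmentation_config block above should begin with:

augmentation_config {
    output_width: 64
    output_height: 48
    output_channel: 3
    # remaining augmentation settings unchanged
}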

Ok, thanks. We will make the changes.

Hi Morgan,

We tried training for up to 120 epochs, but accuracy is still not going beyond 46%. There are 3k images in total (augmented). Is there anything we can do in the above spec to improve accuracy?

Training log:

For multi-GPU, change --gpus based on your machine.
2022-03-16 07:31:23,600 [INFO] root: Registry: ['nvcr.io']
2022-03-16 07:31:23,662 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
2022-03-16 07:31:23,669 [WARNING] tlt.components.docker_handler.docker_handler: 
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 07:31:29,889 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 07:31:29,889 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 07:31:29,889 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 07:31:29,890 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

Using TensorFlow backend.
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 07:31:29,890 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-16 07:31:29,890 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:57: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 07:31:29,890 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2022-03-16 07:31:29,890 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:60: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,680 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,680 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,680 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 07:31:30,680 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 07:31:30,682 [INFO] __main__: Loading pretrained weights. This may take a while...
2022-03-16 07:31:30,682 [INFO] __main__: Loading pretrained weights. This may take a while...
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,705 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,706 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
WARNING:tensorflow:From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,706 [WARNING] tensorflow: From /root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/train.py:61: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.

2022-03-16 07:31:30,707 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt
2022-03-16 07:31:30,708 [INFO] __main__: Loading pretrained weights. This may take a while...
2022-03-16 07:31:30,709 [INFO] __main__: Loading pretrained weights. This may take a while...
Initialize optimizer
Initialize optimizer
Initialize optimizer
Initialize optimizer
Model: "lpnet_baseline_18"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
image_input (InputLayer)        [(None, 3, 48, 64)]  0                                            
__________________________________________________________________________________________________
tf_op_layer_Sum (TensorFlowOpLa [(None, 1, 48, 64)]  0           image_input[0][0]                
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 64, 48, 64)   640         tf_op_layer_Sum[0][0]            
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 64, 48, 64)   256         conv1[0][0]                      
__________________________________________________________________________________________________
re_lu (ReLU)                    (None, 64, 48, 64)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 64, 48, 64)   0           re_lu[0][0]                      
__________________________________________________________________________________________________
res2a_branch2a (Conv2D)         (None, 64, 48, 64)   36928       max_pooling2d[0][0]              
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 64, 48, 64)   256         res2a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_1 (ReLU)                  (None, 64, 48, 64)   0           bn2a_branch2a[0][0]              
__________________________________________________________________________________________________
res2a_branch1 (Conv2D)          (None, 64, 48, 64)   4160        max_pooling2d[0][0]              
__________________________________________________________________________________________________
res2a_branch2b (Conv2D)         (None, 64, 48, 64)   36928       re_lu_1[0][0]                    
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 64, 48, 64)   256         res2a_branch1[0][0]              
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 64, 48, 64)   256         res2a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add (TensorFlowOpLa [(None, 64, 48, 64)] 0           bn2a_branch1[0][0]               
                                                                 bn2a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_2 (ReLU)                  (None, 64, 48, 64)   0           tf_op_layer_add[0][0]            
__________________________________________________________________________________________________
res2b_branch2a (Conv2D)         (None, 64, 48, 64)   36928       re_lu_2[0][0]                    
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 64, 48, 64)   256         res2b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_3 (ReLU)                  (None, 64, 48, 64)   0           bn2b_branch2a[0][0]              
__________________________________________________________________________________________________
res2b_branch2b (Conv2D)         (None, 64, 48, 64)   36928       re_lu_3[0][0]                    
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 64, 48, 64)   256         res2b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_1 (TensorFlowOp [(None, 64, 48, 64)] 0           re_lu_2[0][0]                    
                                                                 bn2b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_4 (ReLU)                  (None, 64, 48, 64)   0           tf_op_layer_add_1[0][0]          
__________________________________________________________________________________________________
res3a_branch2a (Conv2D)         (None, 128, 24, 32)  73856       re_lu_4[0][0]                    
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 128, 24, 32)  512         res3a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_5 (ReLU)                  (None, 128, 24, 32)  0           bn3a_branch2a[0][0]              
__________________________________________________________________________________________________
res3a_branch1 (Conv2D)          (None, 128, 24, 32)  8320        re_lu_4[0][0]                    
__________________________________________________________________________________________________
res3a_branch2b (Conv2D)         (None, 128, 24, 32)  147584      re_lu_5[0][0]                    
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 128, 24, 32)  512         res3a_branch1[0][0]              
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 128, 24, 32)  512         res3a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_2 (TensorFlowOp [(None, 128, 24, 32) 0           bn3a_branch1[0][0]               
                                                                 bn3a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_6 (ReLU)                  (None, 128, 24, 32)  0           tf_op_layer_add_2[0][0]          
__________________________________________________________________________________________________
res3b_branch2a (Conv2D)         (None, 128, 24, 32)  147584      re_lu_6[0][0]                    
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 128, 24, 32)  512         res3b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_7 (ReLU)                  (None, 128, 24, 32)  0           bn3b_branch2a[0][0]              
__________________________________________________________________________________________________
res3b_branch2b (Conv2D)         (None, 128, 24, 32)  147584      re_lu_7[0][0]                    
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 128, 24, 32)  512         res3b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_3 (TensorFlowOp [(None, 128, 24, 32) 0           re_lu_6[0][0]                    
                                                                 bn3b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_8 (ReLU)                  (None, 128, 24, 32)  0           tf_op_layer_add_3[0][0]          
__________________________________________________________________________________________________
res4a_branch2a (Conv2D)         (None, 256, 12, 16)  295168      re_lu_8[0][0]                    
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 256, 12, 16)  1024        res4a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_9 (ReLU)                  (None, 256, 12, 16)  0           bn4a_branch2a[0][0]              
__________________________________________________________________________________________________
res4a_branch1 (Conv2D)          (None, 256, 12, 16)  33024       re_lu_8[0][0]                    
__________________________________________________________________________________________________
res4a_branch2b (Conv2D)         (None, 256, 12, 16)  590080      re_lu_9[0][0]                    
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 256, 12, 16)  1024        res4a_branch1[0][0]              
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 256, 12, 16)  1024        res4a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_4 (TensorFlowOp [(None, 256, 12, 16) 0           bn4a_branch1[0][0]               
                                                                 bn4a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_10 (ReLU)                 (None, 256, 12, 16)  0           tf_op_layer_add_4[0][0]          
__________________________________________________________________________________________________
res4b_branch2a (Conv2D)         (None, 256, 12, 16)  590080      re_lu_10[0][0]                   
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 256, 12, 16)  1024        res4b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_11 (ReLU)                 (None, 256, 12, 16)  0           bn4b_branch2a[0][0]              
__________________________________________________________________________________________________
res4b_branch2b (Conv2D)         (None, 256, 12, 16)  590080      re_lu_11[0][0]                   
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 256, 12, 16)  1024        res4b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_5 (TensorFlowOp [(None, 256, 12, 16) 0           re_lu_10[0][0]                   
                                                                 bn4b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_12 (ReLU)                 (None, 256, 12, 16)  0           tf_op_layer_add_5[0][0]          
__________________________________________________________________________________________________
res5a_branch2a (Conv2D)         (None, 300, 12, 16)  691500      re_lu_12[0][0]                   
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 300, 12, 16)  1200        res5a_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_13 (ReLU)                 (None, 300, 12, 16)  0           bn5a_branch2a[0][0]              
__________________________________________________________________________________________________
res5a_branch1 (Conv2D)          (None, 300, 12, 16)  77100       re_lu_12[0][0]                   
__________________________________________________________________________________________________
res5a_branch2b (Conv2D)         (None, 300, 12, 16)  810300      re_lu_13[0][0]                   
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 300, 12, 16)  1200        res5a_branch1[0][0]              
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 300, 12, 16)  1200        res5a_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_6 (TensorFlowOp [(None, 300, 12, 16) 0           bn5a_branch1[0][0]               
                                                                 bn5a_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_14 (ReLU)                 (None, 300, 12, 16)  0           tf_op_layer_add_6[0][0]          
__________________________________________________________________________________________________
res5b_branch2a (Conv2D)         (None, 300, 12, 16)  810300      re_lu_14[0][0]                   
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 300, 12, 16)  1200        res5b_branch2a[0][0]             
__________________________________________________________________________________________________
re_lu_15 (ReLU)                 (None, 300, 12, 16)  0           bn5b_branch2a[0][0]              
__________________________________________________________________________________________________
res5b_branch2b (Conv2D)         (None, 300, 12, 16)  810300      re_lu_15[0][0]                   
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 300, 12, 16)  1200        res5b_branch2b[0][0]             
__________________________________________________________________________________________________
tf_op_layer_add_7 (TensorFlowOp [(None, 300, 12, 16) 0           re_lu_14[0][0]                   
                                                                 bn5b_branch2b[0][0]              
__________________________________________________________________________________________________
re_lu_16 (ReLU)                 (None, 300, 12, 16)  0           tf_op_layer_add_7[0][0]          
__________________________________________________________________________________________________
permute_feature (Permute)       (None, 16, 12, 300)  0           re_lu_16[0][0]                   
__________________________________________________________________________________________________
flatten_feature (Reshape)       (None, 16, 3600)     0           permute_feature[0][0]            
__________________________________________________________________________________________________
lstm (LSTM)                     (None, 16, 512)      8423424     flatten_feature[0][0]            
__________________________________________________________________________________________________
td_dense (TimeDistributed)      (None, 16, 36)       18468       lstm[0][0]                       
__________________________________________________________________________________________________
softmax (Softmax)               (None, 16, 36)       0           td_dense[0][0]                   
==================================================================================================
Total params: 14,432,480
Trainable params: 14,424,872
Non-trainable params: 7,608
__________________________________________________________________________________________________
2022-03-16 07:31:56,939 [INFO] __main__: Number of images in the training dataset:	  3406
2022-03-16 07:31:56,939 [INFO] __main__: Number of images in the validation dataset:	   306
Epoch 1/120
b8d5f2bf9b47:132:531 [0] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
b8d5f2bf9b47:132:531 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
b8d5f2bf9b47:132:531 [0] NCCL INFO NET/IB : No device found.
b8d5f2bf9b47:132:531 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1]eth0:172.17.0.4<0>
b8d5f2bf9b47:132:531 [0] NCCL INFO Using network Socket
NCCL version 2.9.9+cuda11.3
b8d5f2bf9b47:135:530 [3] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
b8d5f2bf9b47:135:530 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
b8d5f2bf9b47:135:530 [3] NCCL INFO NET/IB : No device found.
b8d5f2bf9b47:135:530 [3] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1]eth0:172.17.0.4<0>
b8d5f2bf9b47:135:530 [3] NCCL INFO Using network Socket
b8d5f2bf9b47:134:525 [2] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
b8d5f2bf9b47:134:525 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
b8d5f2bf9b47:134:525 [2] NCCL INFO NET/IB : No device found.
b8d5f2bf9b47:134:525 [2] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1]eth0:172.17.0.4<0>
b8d5f2bf9b47:134:525 [2] NCCL INFO Using network Socket
b8d5f2bf9b47:133:524 [1] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
b8d5f2bf9b47:133:524 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
b8d5f2bf9b47:133:524 [1] NCCL INFO NET/IB : No device found.
b8d5f2bf9b47:133:524 [1] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1]eth0:172.17.0.4<0>
b8d5f2bf9b47:133:524 [1] NCCL INFO Using network Socket
b8d5f2bf9b47:132:531 [0] NCCL INFO Channel 00/02 :    0   1   2   3
b8d5f2bf9b47:132:531 [0] NCCL INFO Channel 01/02 :    0   1   2   3
b8d5f2bf9b47:132:531 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
b8d5f2bf9b47:134:525 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1
b8d5f2bf9b47:135:530 [3] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2
b8d5f2bf9b47:133:524 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0
b8d5f2bf9b47:135:530 [3] NCCL INFO Channel 00 : 3[1e0] -> 0[1b0] via direct shared memory
b8d5f2bf9b47:134:525 [2] NCCL INFO Channel 00 : 2[1d0] -> 3[1e0] via direct shared memory
b8d5f2bf9b47:135:530 [3] NCCL INFO Channel 01 : 3[1e0] -> 0[1b0] via direct shared memory
b8d5f2bf9b47:134:525 [2] NCCL INFO Channel 01 : 2[1d0] -> 3[1e0] via direct shared memory
b8d5f2bf9b47:133:524 [1] NCCL INFO Channel 00 : 1[1c0] -> 2[1d0] via direct shared memory
b8d5f2bf9b47:132:531 [0] NCCL INFO Channel 00 : 0[1b0] -> 1[1c0] via direct shared memory
b8d5f2bf9b47:133:524 [1] NCCL INFO Channel 01 : 1[1c0] -> 2[1d0] via direct shared memory
b8d5f2bf9b47:132:531 [0] NCCL INFO Channel 01 : 0[1b0] -> 1[1c0] via direct shared memory
b8d5f2bf9b47:134:525 [2] NCCL INFO Connected all rings
b8d5f2bf9b47:133:524 [1] NCCL INFO Connected all rings
b8d5f2bf9b47:132:531 [0] NCCL INFO Connected all rings
b8d5f2bf9b47:134:525 [2] NCCL INFO Channel 00 : 2[1d0] -> 1[1c0] via direct shared memory
b8d5f2bf9b47:134:525 [2] NCCL INFO Channel 01 : 2[1d0] -> 1[1c0] via direct shared memory
b8d5f2bf9b47:135:530 [3] NCCL INFO Connected all rings
b8d5f2bf9b47:135:530 [3] NCCL INFO Channel 00 : 3[1e0] -> 2[1d0] via direct shared memory
b8d5f2bf9b47:135:530 [3] NCCL INFO Channel 01 : 3[1e0] -> 2[1d0] via direct shared memory
b8d5f2bf9b47:135:530 [3] NCCL INFO Connected all trees
b8d5f2bf9b47:135:530 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
b8d5f2bf9b47:133:524 [1] NCCL INFO Channel 00 : 1[1c0] -> 0[1b0] via direct shared memory
b8d5f2bf9b47:135:530 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
b8d5f2bf9b47:133:524 [1] NCCL INFO Channel 01 : 1[1c0] -> 0[1b0] via direct shared memory
b8d5f2bf9b47:132:531 [0] NCCL INFO Connected all trees
b8d5f2bf9b47:132:531 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
b8d5f2bf9b47:132:531 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
b8d5f2bf9b47:135:530 [3] NCCL INFO comm 0x7f89c33d7a30 rank 3 nranks 4 cudaDev 3 busId 1e0 - Init COMPLETE
b8d5f2bf9b47:134:525 [2] NCCL INFO Connected all trees
b8d5f2bf9b47:134:525 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
b8d5f2bf9b47:134:525 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
b8d5f2bf9b47:133:524 [1] NCCL INFO Connected all trees
b8d5f2bf9b47:133:524 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
b8d5f2bf9b47:133:524 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
b8d5f2bf9b47:133:524 [1] NCCL INFO comm 0x7fc38f3d7bc0 rank 1 nranks 4 cudaDev 1 busId 1c0 - Init COMPLETE
b8d5f2bf9b47:134:525 [2] NCCL INFO comm 0x7fb2ab3d7dc0 rank 2 nranks 4 cudaDev 2 busId 1d0 - Init COMPLETE
b8d5f2bf9b47:132:531 [0] NCCL INFO comm 0x7f29c73d9250 rank 0 nranks 4 cudaDev 0 busId 1b0 - Init COMPLETE
b8d5f2bf9b47:132:531 [0] NCCL INFO Launch mode Parallel
 1/26 [>.............................] - ETA: 4:39 - loss: 14.1041WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (1.276901). Check your callbacks.
2022-03-16 07:32:09,891 [WARNING] tensorflow: Method (on_train_batch_end) is slow compared to the batch update (1.276901). Check your callbacks.
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (1.277774). Check your callbacks.
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (1.276856). Check your callbacks.
2022-03-16 07:32:09,892 [WARNING] tensorflow: Method (on_train_batch_end) is slow compared to the batch update (1.277774). Check your callbacks.
2022-03-16 07:32:09,892 [WARNING] tensorflow: Method (on_train_batch_end) is slow compared to the batch update (1.276856). Check your callbacks.
 2/26 [=>............................] - ETA: 2:26 - loss: 16.4173WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (1.277739). Check your callbacks.
2022-03-16 07:32:09,892 [WARNING] tensorflow: Method (on_train_batch_end) is slow compared to the batch update (1.277739). Check your callbacks.
26/26 [==============================] - 19s 747ms/step - loss: 8.6648
Epoch 2/120
26/26 [==============================] - 5s 178ms/step - loss: 3.6388
Epoch 3/120
26/26 [==============================] - 4s 156ms/step - loss: 2.0142
Epoch 4/120
26/26 [==============================] - 5s 184ms/step - loss: 1.4376
Epoch 5/120
25/26 [===========================>..] - ETA: 0s - loss: 1.1230
Epoch 00005: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-05.tlt


*******************************************
Accuracy: 136 / 306  0.4444444444444444
*******************************************


26/26 [==============================] - 20s 774ms/step - loss: 1.1189
Epoch 6/120
26/26 [==============================] - 4s 158ms/step - loss: 0.9136
Epoch 7/120
26/26 [==============================] - 4s 157ms/step - loss: 0.7347
Epoch 8/120
26/26 [==============================] - 4s 157ms/step - loss: 0.6426
Epoch 9/120
26/26 [==============================] - 4s 157ms/step - loss: 0.5540
Epoch 10/120
25/26 [===========================>..] - ETA: 0s - loss: 0.4381
Epoch 00010: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-10.tlt


*******************************************
Accuracy: 144 / 306  0.47058823529411764
*******************************************


26/26 [==============================] - 10s 393ms/step - loss: 0.4368
Epoch 11/120
26/26 [==============================] - 4s 158ms/step - loss: 0.4816
Epoch 12/120
26/26 [==============================] - 4s 158ms/step - loss: 0.4345
Epoch 13/120
26/26 [==============================] - 4s 159ms/step - loss: 0.3703
Epoch 14/120
26/26 [==============================] - 4s 157ms/step - loss: 0.3692
Epoch 15/120
25/26 [===========================>..] - ETA: 0s - loss: 0.3712
Epoch 00015: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-15.tlt


*******************************************
Accuracy: 137 / 306  0.4477124183006536
*******************************************


26/26 [==============================] - 10s 398ms/step - loss: 0.3688
Epoch 16/120
26/26 [==============================] - 4s 160ms/step - loss: 0.3128
Epoch 17/120
26/26 [==============================] - 4s 159ms/step - loss: 0.3130
Epoch 18/120
26/26 [==============================] - 4s 158ms/step - loss: 0.3123
Epoch 19/120
26/26 [==============================] - 4s 159ms/step - loss: 0.2868
Epoch 20/120
25/26 [===========================>..] - ETA: 0s - loss: 0.2620
Epoch 00020: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-20.tlt


*******************************************
Accuracy: 140 / 306  0.45751633986928103
*******************************************


26/26 [==============================] - 10s 398ms/step - loss: 0.2646
Epoch 21/120
26/26 [==============================] - 4s 161ms/step - loss: 0.2479
Epoch 22/120
26/26 [==============================] - 4s 161ms/step - loss: 0.2541
Epoch 23/120
26/26 [==============================] - 4s 159ms/step - loss: 0.2508
Epoch 24/120
26/26 [==============================] - 4s 160ms/step - loss: 0.2376
Epoch 25/120
25/26 [===========================>..] - ETA: 0s - loss: 0.2465
Epoch 00025: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-25.tlt


*******************************************
Accuracy: 145 / 306  0.4738562091503268
*******************************************


26/26 [==============================] - 10s 402ms/step - loss: 0.2526
Epoch 26/120
26/26 [==============================] - 4s 163ms/step - loss: 0.2372
Epoch 27/120
26/26 [==============================] - 4s 161ms/step - loss: 0.2472
Epoch 28/120
26/26 [==============================] - 4s 159ms/step - loss: 0.2391
Epoch 29/120
26/26 [==============================] - 4s 161ms/step - loss: 0.2166
Epoch 30/120
25/26 [===========================>..] - ETA: 0s - loss: 0.2251
Epoch 00030: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-30.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 400ms/step - loss: 0.2250
Epoch 31/120
26/26 [==============================] - 4s 163ms/step - loss: 0.2216
Epoch 32/120
26/26 [==============================] - 4s 160ms/step - loss: 0.2216
Epoch 33/120
26/26 [==============================] - 4s 160ms/step - loss: 0.2035
Epoch 34/120
26/26 [==============================] - 4s 160ms/step - loss: 0.2025
Epoch 35/120
25/26 [===========================>..] - ETA: 0s - loss: 0.2105
Epoch 00035: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-35.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 400ms/step - loss: 0.2093
Epoch 36/120
26/26 [==============================] - 4s 162ms/step - loss: 0.1969
Epoch 37/120
26/26 [==============================] - 4s 160ms/step - loss: 0.2025
Epoch 38/120
26/26 [==============================] - 4s 160ms/step - loss: 0.2055
Epoch 39/120
26/26 [==============================] - 4s 159ms/step - loss: 0.2085
Epoch 40/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1933
Epoch 00040: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-40.tlt


*******************************************
Accuracy: 138 / 306  0.45098039215686275
*******************************************


26/26 [==============================] - 10s 402ms/step - loss: 0.1941
Epoch 41/120
26/26 [==============================] - 4s 163ms/step - loss: 0.1989
Epoch 42/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1984
Epoch 43/120
26/26 [==============================] - 4s 162ms/step - loss: 0.2044
Epoch 44/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1922
Epoch 45/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1894
Epoch 00045: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-45.tlt


*******************************************
Accuracy: 138 / 306  0.45098039215686275
*******************************************


26/26 [==============================] - 10s 400ms/step - loss: 0.1890
Epoch 46/120
26/26 [==============================] - 4s 162ms/step - loss: 0.1841
Epoch 47/120
26/26 [==============================] - 4s 162ms/step - loss: 0.1917
Epoch 48/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1834
Epoch 49/120
26/26 [==============================] - 4s 162ms/step - loss: 0.1888
Epoch 50/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1816
Epoch 00050: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-50.tlt


*******************************************
Accuracy: 137 / 306  0.4477124183006536
*******************************************


26/26 [==============================] - 10s 402ms/step - loss: 0.1815
Epoch 51/120
26/26 [==============================] - 4s 162ms/step - loss: 0.1785
Epoch 52/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1821
Epoch 53/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1787
Epoch 54/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1890
Epoch 55/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1756
Epoch 00055: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-55.tlt


*******************************************
Accuracy: 135 / 306  0.4411764705882353
*******************************************


26/26 [==============================] - 10s 403ms/step - loss: 0.1753
Epoch 56/120
26/26 [==============================] - 4s 163ms/step - loss: 0.1786
Epoch 57/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1746
Epoch 58/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1752
Epoch 59/120
26/26 [==============================] - 4s 160ms/step - loss: 0.1790
Epoch 60/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1777
Epoch 00060: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-60.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 398ms/step - loss: 0.1786
Epoch 61/120
26/26 [==============================] - 4s 160ms/step - loss: 0.1779
Epoch 62/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1788
Epoch 63/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1752
Epoch 64/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1749
Epoch 65/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1766
Epoch 00065: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-65.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 395ms/step - loss: 0.1760
Epoch 66/120
26/26 [==============================] - 4s 159ms/step - loss: 0.1748
Epoch 67/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1745
Epoch 68/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1789
Epoch 69/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1694
Epoch 70/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1705
Epoch 00070: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-70.tlt


*******************************************
Accuracy: 138 / 306  0.45098039215686275
*******************************************


26/26 [==============================] - 10s 393ms/step - loss: 0.1705
Epoch 71/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1707
Epoch 72/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1678
Epoch 73/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1710
Epoch 74/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1687
Epoch 75/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1642
Epoch 00075: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-75.tlt


*******************************************
Accuracy: 139 / 306  0.4542483660130719
*******************************************


26/26 [==============================] - 10s 394ms/step - loss: 0.1649
Epoch 76/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1737
Epoch 77/120
26/26 [==============================] - 4s 155ms/step - loss: 0.1718
Epoch 78/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1691
Epoch 79/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1655
Epoch 80/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1732
Epoch 00080: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-80.tlt


*******************************************
Accuracy: 140 / 306  0.45751633986928103
*******************************************


26/26 [==============================] - 10s 395ms/step - loss: 0.1726
Epoch 81/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1770
Epoch 82/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1660
Epoch 83/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1653
Epoch 84/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1705
Epoch 85/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1669
Epoch 00085: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-85.tlt


*******************************************
Accuracy: 140 / 306  0.45751633986928103
*******************************************


26/26 [==============================] - 10s 390ms/step - loss: 0.1677
Epoch 86/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1680
Epoch 87/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1661
Epoch 88/120
26/26 [==============================] - 4s 155ms/step - loss: 0.1661
Epoch 89/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1661
Epoch 90/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1661
Epoch 00090: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-90.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 392ms/step - loss: 0.1664
Epoch 91/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1658
Epoch 92/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1708
Epoch 93/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1721
Epoch 94/120
26/26 [==============================] - 4s 156ms/step - loss: 0.1667
Epoch 95/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1676
Epoch 00095: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-95.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 391ms/step - loss: 0.1672
Epoch 96/120
26/26 [==============================] - 4s 159ms/step - loss: 0.1638
Epoch 97/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1660
Epoch 98/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1691
Epoch 99/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1686
Epoch 100/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1684
Epoch 00100: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-100.tlt


*******************************************
Accuracy: 141 / 306  0.46078431372549017
*******************************************


26/26 [==============================] - 10s 392ms/step - loss: 0.1682
Epoch 101/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1709
Epoch 102/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1691
Epoch 103/120
26/26 [==============================] - 4s 157ms/step - loss: 0.1667
Epoch 104/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1655
Epoch 105/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1643
Epoch 00105: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-105.tlt


*******************************************
Accuracy: 139 / 306  0.4542483660130719
*******************************************


26/26 [==============================] - 10s 393ms/step - loss: 0.1641
Epoch 106/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1648
Epoch 107/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1652
Epoch 108/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1672
Epoch 109/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1663
Epoch 110/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1708
Epoch 00110: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-110.tlt


*******************************************
Accuracy: 139 / 306  0.4542483660130719
*******************************************


26/26 [==============================] - 10s 393ms/step - loss: 0.1703
Epoch 111/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1723
Epoch 112/120
26/26 [==============================] - 4s 158ms/step - loss: 0.1624
Epoch 113/120
26/26 [==============================] - 4s 159ms/step - loss: 0.1643
Epoch 114/120
26/26 [==============================] - 4s 160ms/step - loss: 0.1665
Epoch 115/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1657
Epoch 00115: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-115.tlt


*******************************************
Accuracy: 139 / 306  0.4542483660130719
*******************************************


26/26 [==============================] - 10s 395ms/step - loss: 0.1656
Epoch 116/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1656
Epoch 117/120
26/26 [==============================] - 4s 160ms/step - loss: 0.1680
Epoch 118/120
26/26 [==============================] - 4s 161ms/step - loss: 0.1639
Epoch 119/120
26/26 [==============================] - 4s 160ms/step - loss: 0.1658
Epoch 120/120
25/26 [===========================>..] - ETA: 0s - loss: 0.1670
Epoch 00120: saving model to /workspace/tao-experiments/lprnet/experiment_dir_unpruned/weights/lprnet_epoch-120.tlt


*******************************************
Accuracy: 139 / 306  0.4542483660130719
*******************************************


26/26 [==============================] - 10s 398ms/step - loss: 0.1670
2022-03-16 07:43:09,962 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

Could you please set a higher max_rotate_degree and run some experiments?
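For example, in the augmentation_config section of the training spec, something along these lines could be tried (the 30-degree value is only an illustration, not an official recommendation; tune it to match how warped your text actually is):

augmentation_config {
    output_width: 64
    output_height: 64
    output_channel: 3
    max_rotate_degree: 30    # example only; raised from the current 5 to cover warped text
    rotate_prob: 0.5
    gaussian_kernel_size: 5
    gaussian_kernel_size: 7
    gaussian_kernel_size: 15
    blur_prob: 0.5
    reverse_color_prob: 0.5
    keep_original_prob: 0.3
}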

Thanks, Morgan, we will give it a try. The reason we kept a lower rotate degree was that the dataset already has some rotation applied through offline augmentation, up to 20 degrees in both directions. We will retrain with a higher rotate degree, but could you please also guide me on the following:

  1. Is there any OCR alternative to LPRnet that plays nicely with DeepStream?
  2. I came across an OpenALPR plugin here: GitHub - openalpr/deepstream_jetson: OpenALPR Plug-in for DeepStream on Jetson, which has not been updated in four years. Is it still relevant to try?
  3. There is a Scene Text OCR available from NVIDIA. Can this be used as an SGIE? GitHub - NVIDIA-AI-IOT/scene-text-recognition

Hi,
1) Currently, TAO does not support OCR yet. For running LPRNet with DeepStream, see GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream (see the config sketch below).
2) Not sure about its status, since we have not tried it.
3) Could you try running it and feeding it some images?
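
For reference, in deepstream_lpr_app the recognizer runs as a secondary nvinfer (SGIE) that operates on the objects cropped by the upstream detector. A rough sketch of such an SGIE config follows; every file name, GIE ID and parser name here is a placeholder, so take the real values from the configs shipped with the deepstream_lpr_app repo:

[property]
gpu-id=0
process-mode=2                     # secondary mode: run on detected objects, not full frames
gie-unique-id=3                    # placeholder ID for this recognizer
operate-on-gie-id=2                # placeholder ID of the upstream label/plate detector
model-engine-file=recognizer_fp16.engine     # placeholder engine file
labelfile-path=characters.txt                # placeholder character list
batch-size=16
network-mode=2                     # FP16
network-type=1                     # classifier-type network
parse-classifier-func-name=NvDsInferParseCustomPlaceholder   # placeholder custom output parser
custom-lib-path=libnvdsinfer_custom_impl_placeholder.so      # placeholder parser library

A custom output parser is needed because the per-timestep softmax of a recognizer such as LPRNet (the (None, 16, 36) output in the model summary above) has to be decoded into a text string before it can be attached as classifier metadata.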
