Creating a Real-Time License Plate Detection and Recognition App

I realized that LPR can only recognize single-row rectangular license plates. What about square license plates with two rows? Can we re-train the model with a new dataset that contains two-row character images?

Currently, LPRNet does not support two-row license plates.


I am trying to follow the flow of this discussion and its steps, but after splitting the initial data 80/20, this instruction fails: tlt detectnet_v2 dataset_convert -d /workspace/openalpr/SPECS_tfrecord.txt -o /workspace/openalpr/lpd_tfrecord/lpd

It seems to need Docker and asks “Did you run docker login?”, which may be assumed but is not in the instruction flow.
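For readers hitting the same error: the `tlt` launcher pulls its container from NGC, so logging in to nvcr.io first usually resolves it. A sketch below; `<YOUR_NGC_API_KEY>` is a placeholder for the API key generated from your own NGC account.

```shell
# The username for nvcr.io is the literal string $oauthtoken;
# the password is your NGC API key (placeholder below).
docker login nvcr.io --username '$oauthtoken' --password "<YOUR_NGC_API_KEY>"

# Then re-run the conversion step:
tlt detectnet_v2 dataset_convert \
    -d /workspace/openalpr/SPECS_tfrecord.txt \
    -o /workspace/openalpr/lpd_tfrecord/lpd
```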

Any help appreciated. Thanks.

Please create a new forum topic. Thanks.

To run LPDR (License Plate Detection and Recognition), do I just need a Jetson Nano and a Camera Board V2 8MP (Raspberry Pi camera)?

Jetson devices or PCs with a dGPU can run LPDR.
You can run inference with an mp4 file.

I have a Jetson Nano and a Camera Board V2 8MP; with these, can I run real-time license plate detection and recognition?

For the FPS numbers, refer to Creating a Real-Time License Plate Detection and Recognition App | NVIDIA Developer Blog

Could you give us some guidance on improving the precision using the OpenALPR data?

I ran the following command to start fine-tuning on the OpenALPR data in the TLT container:

 $ detectnet_v2 train -e /workspace/openalpr/SPECS_train.txt -r /workspace/openalpr/exp_unpruned -k nvidia_tlt

But it did not reach your reference accuracy. Here is our re-training result.

Epoch 120/120

Validation cost: 0.000082
Mean average_precision (in %): 63.5910

class name      average precision (in %)
------------  --------------------------
lpd                               63.591

Here is the training spec file.

random_seed: 42
dataset_config {
  data_sources {
    tfrecords_path: "/workspace/output/lpd/experiments/lpd_tfrecord/*"
    image_directory_path: "/workspace/output/lpd/experiments/lpd/data/"
  }
  image_extension: "jpg"
  target_class_mapping {
    key: "lpd"
    value: "lpd"
  }
  validation_fold: 0
}
augmentation_config {
  preprocessing {
    output_image_width: 640
    output_image_height: 480
    min_bbox_width: 1.0
    min_bbox_height: 1.0
    output_image_channel: 3
  }
  spatial_augmentation {
    hflip_probability: 0.5
    zoom_min: 1.0
    zoom_max: 1.0
    translate_max_x: 8.0
    translate_max_y: 8.0
  }
  color_augmentation {
    hue_rotation_max: 25.0
    saturation_shift_max: 0.20000000298
    contrast_scale_max: 0.10000000149
    contrast_center: 0.5
  }
}
postprocessing_config {
  target_class_config {
    key: "lpd"
    value {
      clustering_config {
        coverage_threshold: 0.00499999988824
        dbscan_eps: 0.20000000298
        dbscan_min_samples: 0.0500000007451
        minimum_bounding_box_height: 4
      }
    }
  }
}
model_config {
  pretrained_model_file: "/workspace/output/lpd/experiments/pretrained/usa_unpruned.tlt"
  num_layers: 18
  use_batch_norm: true
  objective_set {
    bbox {
      scale: 35.0
      offset: 0.5
    }
    cov {
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
  arch: "resnet"
}
evaluation_config {
  validation_period_during_training: 10
  first_validation_epoch: 1
  minimum_detection_ground_truth_overlap {
    key: "lpd"
    value: 0.699999988079
  }
  evaluation_box_config {
    key: "lpd"
    value {
      minimum_height: 10
      maximum_height: 9999
      minimum_width: 10
      maximum_width: 9999
    }
  }
  average_precision_mode: INTEGRATE
}
cost_function_config {
  target_classes {
    name: "lpd"
    class_weight: 1.0
    coverage_foreground_weight: 0.0500000007451
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 10.0
    }
  }
  enable_autoweighting: true
  max_objective_weight: 0.999899983406
  min_objective_weight: 9.99999974738e-05
}
training_config {
  batch_size_per_gpu: 4
  num_epochs: 120
  enable_qat: False
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-06
      max_learning_rate: 5e-04
      soft_start: 0.10000000149
      annealing: 0.699999988079
    }
  }
  regularizer {
    type: L1
    weight: 3.00000002618e-09
  }
  optimizer {
    adam {
      epsilon: 9.99999993923e-09
      beta1: 0.899999976158
      beta2: 0.999000012875
    }
  }
  cost_scaling {
    initial_exponent: 20.0
    increment: 0.005
    decrement: 1.0
  }
  checkpoint_interval: 10
}
bbox_rasterizer_config {
  target_class_config {
    key: "lpd"
    value {
      cov_center_x: 0.5
      cov_center_y: 0.5
      cov_radius_x: 0.40000000596
      cov_radius_y: 0.40000000596
      bbox_min_radius: 1.0
    }
  }
  deadzone_radius: 0.400000154972
}

Best regards.

How did you generate the tfrecords? Since OpenALPR only contains 222 images and the tfrecords generation takes a random subset of them, different tfrecords may produce different results.

To improve the mAP, you can try running more epochs.
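As a side note on reproducibility: the random partitioning is one source of run-to-run variance. A minimal sketch of a seeded 80/20 split performed outside of `dataset_convert` (the file names here are made up for illustration):

```python
import random

def split_dataset(filenames, val_fraction=0.2, seed=42):
    """Deterministically split filenames into (train, val) lists."""
    files = sorted(filenames)           # sort first for a stable starting order
    random.Random(seed).shuffle(files)  # seeded shuffle -> reproducible split
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]

# 222 placeholder images, matching the OpenALPR benchmark size
train, val = split_dataset([f"img_{i:03d}.jpg" for i in range(222)])
print(len(train), len(val))  # 178 44
```

Running the same function twice with the same seed yields the identical partition, so the tfrecords (and therefore the validation set) stay fixed across experiments.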

Here is the tfrecords file. I followed the blog.

kitti_config {
  root_directory_path: "/workspace/output/lpd/experiments/lpd/data"
  image_dir_name: "image"
  label_dir_name: "label"
  image_extension: ".jpg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 4
}
image_directory_path: "/workspace/output/lpd/experiments/lpd/data"

I increased the number of epochs from 120 to 240, but it seems not to be effective.

Epoch 240/240

Validation cost: 0.000087
Mean average_precision (in %): 63.5292

class name      average precision (in %)
------------  --------------------------
lpd                              63.5292

Please create a new forum topic for discussing.

I see.

Hi there,
I need to use the NVIDIA ALPR application with a USB camera rather than an MP4 video file.
Has anyone done this before, or does anyone know the procedure for it?
I looked into the app; it looks complicated, and I don’t know how to change it to run on a USB camera.
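My understanding so far: in the standard deepstream-app configuration format, switching from a file to a USB camera is done in the [source0] group. A sketch is below; the resolution, frame rate, and device node are assumptions for a typical V4L2 camera, and the LPR sample app may wire its source differently.

```ini
[source0]
enable=1
# type=1 selects a V4L2 camera (type=2/3 are URI/MultiURI file sources)
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# 0 means /dev/video0
camera-v4l2-dev-node=0
```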

Any help is greatly appreciated.

Best Regards,

Please create a new forum topic. Thanks.

I created it. The topic name is:
Run Nvidia ALPR app on USB camera

Your assistance is appreciated.


Did you create the topic in TAO forum? TAO Toolkit - NVIDIA Developer Forums
If not, could you share the link where you have created the topic?

I have now posted it on the TAO forum.
This is the link.