The LPD and LPR models in the TAO tool do not work well

Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc)
Xavier
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
LPRnet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
Get the model from NGC
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
Hi guys,
In my test, the LPD and LPR models in the TAO tool do not work well. The result is as follows:

`./deepstream-lpr-app 2 2 0 ch_car_test.mp4 ch_car_test.mp4 output.264`

I have used a lot of videos for testing, and it is difficult for LPDnet to correctly detect the license plate. Any ideas? Thanks.

Can you share your spec file? BTW, for Chinese license plates, please use the ccpd model.

Hi Morganh,
I just followed the introduction of this project: deepstream_lpr_app.
I ran this command:
./download_ch.sh
and then downloaded the pre-trained models from NGC. I did not use TAO to train a custom LPDnet or LPRnet, so I have no spec file.
I am sure I am using the ccpd model. Another question: has the ccpd model been tested? Where can I download the official test video?

Sorry, I meant the config files you use when configuring DeepStream. Please try fp32 or fp16 mode and check the result. The default is int8 mode.
For the ccpd model, there is no official test video, but we tested it internally against part of the CCPD dataset and it works. See the LPD model card on NVIDIA NGC; it includes accuracy results for the ccpd model. Also, you can refer to the steps mentioned in "Instructions to deploy these models with DeepStream". It runs inference with DeepStream using the trafficcamnet model as the 1st engine and the lpd model as the 2nd engine.
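For reference, the precision is selected with the `network-mode` key in the nvinfer config file. A sketch of just the relevant lines (when changing the mode, also remove or rename any previously built engine file, otherwise the existing int8 engine may be reused):

```
[property]
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
```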

Hi Morganh,
I modified the following configuration, but the test result is still bad.

BTW, I tested 'ccpd_unpruned.tlt' in TAO and it can't detect license plates correctly either. Can you give me some suggestions? The following is inference_spec.txt:

inferencer_config{
  # defining target class names for the experiment.
  # Note: This must be mentioned in order of the networks classes.
  target_classes: "lpd"
  # Inference dimensions.
  image_width: 1248
  image_height: 384
  # Must match what the model was trained for.
  image_channels: 3
  batch_size: 16
  gpu_index: 0
  # model handler config
  tlt_config{
    model: "/workspace/tao-experiments/detectnet_v2/experiment_dir_retrain/weights/ccpd_unpruned.tlt"
  }
}
bbox_handler_config{
  kitti_dump: true
  disable_overlay: false
  overlay_linewidth: 2
  classwise_bbox_handler_config{
    key:"lpd"
    value: {
      confidence_model: "aggregate_cov"
      output_map: "lpd"
      bbox_color{
        R: 0
        G: 255
        B: 0
      }
      clustering_config{
        clustering_algorithm: DBSCAN
        coverage_threshold: 0.005
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        dbscan_confidence_threshold: 0.9
        minimum_bounding_box_height: 4
      }
    }
  }
}

The config file is not correct; there is an issue in it. Please modify

uff-input-dims=3;480;640;0

to

uff-input-dims=3;1168;720;0

And also modify

input-object-min-height=30
input-object-min-width=40

to

input-object-min-height=73
input-object-min-width=45

Also, for your tao infer spec file,
please modify

image_width: 1248
image_height: 384

to

image_width: 720
image_height: 1168
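Putting these corrections together (note that `uff-input-dims` is ordered channels;height;width, so `1168` is the input height and `720` the width, which matches `image_height`/`image_width` in the TAO spec):

```
# DeepStream lpd_ccpd_config
uff-input-dims=3;1168;720;0
input-object-min-height=73
input-object-min-width=45

# TAO inference_spec.txt
image_width: 720
image_height: 1168
```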

Thank you very much. The CCPD model can correctly run inference on pictures downloaded through Google, but LPDnet cannot correctly detect plates in the pictures and videos taken with a mobile phone. Any ideas?

Is there any difference between Google's pictures and the mobile phone's pictures?

I think there is no difference between them. When I feed the video into the deepstream-lpr-app test, LPDnet cannot locate the license plate position correctly, but the result is correct when I use LPDnet to infer on a screenshot of the vehicle. I think there is still an issue in lpd_ccpd_config. My configuration file is as follows. Any ideas?


################################################################################
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=1
labelfile-path=../models/LP/LPD/ccpd_label.txt
tlt-encoded-model=../models/LP/LPD/ccpd_pruned.etlt
tlt-model-key=nvidia_tlt
model-engine-file=../models/LP/LPD/ccpd_pruned.etlt_b16_gpu0_int8.engine
int8-calib-file=../models/LP/LPD/ccpd_cal.bin
uff-input-dims=3;1168;720;0
uff-input-blob-name=input_1
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
##1 Primary 2 Secondary
process-mode=2
interval=0
gie-unique-id=2
#0 detector 1 classifier 2 segmentation 3 instance segmentation
network-type=0
operate-on-gie-id=1
operate-on-class-ids=0
#no cluster
cluster-mode=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
input-object-min-height=73
input-object-min-width=45
#GPU:1  VIC:2(Jetson only)
#scaling-compute-hw=2
#enable-dla=1

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
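As an aside, the `net-scale-factor` in the config above is simply 1/255 written out as a float: nvinfer multiplies each pixel by this value so that 8-bit intensities land in [0, 1]. A quick sanity check (plain Python, nothing DeepStream-specific):

```python
# net-scale-factor copied from the nvinfer config above.
NET_SCALE_FACTOR = 0.0039215697906911373

# It is 1/255, i.e. the factor that maps pixel values [0, 255] into [0, 1].
assert abs(NET_SCALE_FACTOR - 1 / 255) < 1e-8

# A full-intensity pixel scales to (approximately) 1.0.
print(round(255 * NET_SCALE_FACTOR, 6))  # -> 1.0
```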


Please try below lower pre-cluster-threshold, for example,

     pre-cluster-threshold=0.05

Then, try fp16 or fp32 mode.

I followed your suggestion to lower pre-cluster-threshold and tried fp32 mode; the result is still not good.

Could you add below and retry?

maintain-aspect-ratio=1
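`maintain-aspect-ratio=1` makes nvinfer preserve the aspect ratio when scaling each detected car crop to the network input, padding the remainder instead of stretching it. A rough illustration of the idea (my own sketch, not DeepStream's actual code):

```python
def letterbox_scale(src_w, src_h, dst_w, dst_h):
    """Largest uniform scale that fits src inside dst without distortion."""
    return min(dst_w / src_w, dst_h / src_h)

# Without maintain-aspect-ratio, a wide 640x120 car crop would be
# stretched independently in each dimension to fill the 720x1168 input;
# with it, both sides share one uniform scale and the rest is padded.
scale = letterbox_scale(640, 120, 720, 1168)
print(round(scale, 3))  # -> 1.125
```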

LPDnet works sometimes, but the results are still bad.

So, the tlt/tao inference is good but the DeepStream inference is not. I will check further.
Is it possible to share a small video clip?

Also, please try the usa lpd model as well.
Here is the video we shared previously in the blog: License plate detection and recognition demo using NVIDIA pre-trained models - YouTube

I will continue trying the lpd model. Could you please provide an email address? I will send a video clip to it soon.

I already sent you a forum message. You can attach the video clip or a link to it there. Thanks.

Please set model-color-format=0. It works on my side.

Thank you, it also works on my side. So what does model-color-format mean?

model-color-format: Color format required by the model
0: RGB
1: BGR
2: GRAY
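In other words, setting model-color-format=1 while the model was trained on RGB swaps the red and blue channels at inference time, which can noticeably degrade detection. A minimal illustration of the channel ordering (plain Python):

```python
# One pixel as (R, G, B): strong red, no green, some blue.
rgb_pixel = (200, 0, 50)

# Reversing the channel order is exactly the RGB <-> BGR conversion.
bgr_pixel = rgb_pixel[::-1]
print(bgr_pixel)  # -> (50, 0, 200): red and blue have traded places
```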
