LPD/LPR improvable?

Hardware Platform: dGPU
DeepStream Version: DS7

I’m wondering whether there is a configuration option among the many below (the LPD configuration) that would allow slightly resizing the bounding box produced by the LPD detector before it goes into LPR, in order to improve the results (a probe-based workaround is sketched further below).

################################################################################
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
#labelfile-path=./models/tao_pretrained_models/yolov4-tiny/usa_lpd_label.txt
model-engine-file=./models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
int8-calib-file=./models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_cal.bin
tlt-encoded-model=./models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;480;640
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=2
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
#if scaling-compute-hw = VIC, input-object-min-height needs to be even and greater than or equal to (model height)/16
input-object-min-height=30
#if scaling-compute-hw = VIC, input-object-min-width needs to be even and greater than or equal to (model width)/16
input-object-min-width=40
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
layer-device-precision=cls/mul:fp32:gpu;box/mul_6:fp32:gpu;box/add:fp32:gpu;box/mul_4:fp32:gpu;box/add_1:fp32:gpu;cls/Reshape_reshape:fp32:gpu;box/Reshape_reshape:fp32:gpu;encoded_detections:fp32:gpu;bg_leaky_conv1024_lrelu:fp32:gpu;sm_bbox_processor/concat_concat:fp32:gpu;sm_bbox_processor/sub:fp32:gpu;sm_bbox_processor/Exp:fp32:gpu;yolo_conv1_4_lrelu:fp32:gpu;yolo_conv1_3_1_lrelu:fp32:gpu;md_leaky_conv512_lrelu:fp32:gpu;sm_bbox_processor/Reshape_reshape:fp32:gpu;conv_sm_object:fp32:gpu;yolo_conv5_1_lrelu:fp32:gpu;concatenate_6:fp32:gpu;yolo_conv3_1_lrelu:fp32:gpu;concatenate_5:fp32:gpu;yolo_neck_1_lrelu:fp32:gpu

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
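For reference, the net-scale-factor / offsets / model-color-format keys above describe nvinfer's documented per-pixel normalization, y = net-scale-factor * (x - mean), applied per channel after conversion to the model color format (model-color-format=1 is BGR). A small NumPy sketch of what that amounts to for this config (the function name is mine, not part of DeepStream):

import numpy as np

NET_SCALE_FACTOR = 1.0                              # net-scale-factor
OFFSETS_BGR = np.array([103.939, 116.779, 123.68],  # offsets, BGR order
                       dtype=np.float32)            # (model-color-format=1)

def nvinfer_normalize(bgr_crop: np.ndarray) -> np.ndarray:
    """bgr_crop: HxWx3 uint8 BGR crop, already letterboxed to 480x640
    (infer-dims=3;480;640 with maintain-aspect-ratio=1)."""
    x = bgr_crop.astype(np.float32)
    y = NET_SCALE_FACTOR * (x - OFFSETS_BGR)  # per-channel mean subtraction
    return y.transpose(2, 0, 1)               # HWC -> CHW tensor for the engine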

I have a plate that is constantly and repeatedly recognized as 51DRVV, even though anyone can see it is 61DRVV. I suspect this has to do with the LPD result being slightly too narrow (the top-left corner should be moved slightly to the left).

[image: photo of the plate read as 51DRVV]
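As far as I can tell, nvinfer itself has no key for padding a detected box before it is handed to the next GIE, so the usual workaround is a buffer pad probe upstream of the LPR SGIE that enlarges each plate's rect_params in the object metadata. Below is a minimal Python sketch of that idea using the standard pyds bindings; PAD_PX, the probe name, and the placeholder element name sgie_lpr are assumptions to adapt, not a drop-in fix.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

LPD_GIE_ID = 2   # matches gie-unique-id=2 in the LPD config above
PAD_PX = 8       # assumed padding in pixels; tune per stream resolution

def pad_lpd_bboxes(pad, info, u_data):
    """Enlarge every LPD plate bbox a little before LPR crops it."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj.unique_component_id == LPD_GIE_ID:  # plates only
                r = obj.rect_params
                r.left = max(0.0, r.left - PAD_PX)
                r.top = max(0.0, r.top - PAD_PX)
                r.width = min(frame_meta.source_frame_width - r.left,
                              r.width + 2 * PAD_PX)
                r.height = min(frame_meta.source_frame_height - r.top,
                               r.height + 2 * PAD_PX)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach to the LPR nvinfer element's sink pad ("sgie_lpr" is a placeholder):
# sgie_lpr.get_static_pad("sink").add_probe(
#     Gst.PadProbeType.BUFFER, pad_lpd_bboxes, 0)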

Which country and state does your car license plate belong to?

USA/FL

You may need to retrain the model with data from FL.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
