NVIDIA-AI-IOT/deepstream_lpr_app is not working when using only the LPD and LPR models

Hello, this is Akash Singh. I have been using DeepStream for a year now and am currently facing a problem.
• Issue Type: NVIDIA-AI-IOT/deepstream_lpr_app is not working when using only the LPD and LPR models, but works when using the TrafficCamNet model + LPD and LPR models.

Below are the hardware specifications I am using:

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.3 [L4T 32.7.3]
• TensorRT Version: 8.2.1.9
• CUDA Version: 10.2.300
• cuDNN Version: 8.2.1.32
• Python Version: 3.6.9
• Model: NVIDIA Jetson Nano Developer Kit

I am using NVIDIA-AI-IOT/deepstream_lpr_app (sample app code for LPR deployment on DeepStream) in my project to detect car number plates and recognize their numbers.
The problem I am facing: when using the TrafficCamNet model + the LPD and LPR models (all three models together), I am able to extract the OCR in the probe function. But when using only the LPD and LPR models, the probe function does not return the OCR.

I have checked the following:

    l_class = obj_meta.classifier_meta_list   # empty when using only the LPD and LPR models,
                                              # but contains OCR values when using TrafficCamNet + LPD + LPR

in the following probe code:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    num_rects = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    lp_dict = {}
    # Retrieve batch metadata from the gst_buffer.
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta.
            # The casting is done by pyds.NvDsFrameMeta.cast().
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        '''
        print("Frame Number is ", frame_meta.frame_num)
        print("Source id is ", frame_meta.source_id)
        print("Batch id is ", frame_meta.batch_id)
        print("Source Frame Width ", frame_meta.source_frame_width)
        print("Source Frame Height ", frame_meta.source_frame_height)
        print("Num object meta ", frame_meta.num_obj_meta)
        '''
        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta

        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            if True:
                # no ROI
                # l_class is empty when using only the LPD and LPR models,
                # but contains OCR values when using TrafficCamNet + LPD + LPR.
                l_class = obj_meta.classifier_meta_list
                while l_class is not None:
                    try:
                        class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
                    except StopIteration:
                        break

                    l_label = class_meta.label_info_list

                    while l_label is not None:
                        try:
                            label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                        except StopIteration:
                            break

                        print("Current OCR ", label_info.result_label)

                        try:
                            l_label = l_label.next
                        except StopIteration:
                            break
                    try:
                        l_class = l_class.next
                    except StopIteration:
                        break

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
The reason I am using two models instead of three (LPD and LPR instead of TrafficCamNet + LPD and LPR) is to get more FPS and to reduce RAM and resource consumption; there are also other programs running alongside the DeepStream app in my case.
Can you guide me on what changes I need to make in the probe function and the lpr_parser program in order to get the OCR value (when using only the LPD and LPR models)?

The pre-trained LPD model is trained on car pictures (that is, pictures that contain only one car, with the car filling the image). It will not detect anything if you input images with many things in them. The application’s behaviour is decided by the models. Why did you use “LPD + LPR” only?

Thanks for your reply.
I have also tried the LPR model with LPD (YOLOv4). YOLOv4 returns the bounding boxes of the number plates, but the LPR again returns nothing.

In this case the detection is taking place but the LPR is not working. What changes do I need to make in order for the LPR to work with YOLOv4?

Do you use US car pictures or Chinese car pictures? There is only a US (California) car plate pre-trained LPR model and a Chinese (Mainland) car plate pre-trained LPR model.

I have used the US car plate model.

Are you using the pre-trained models (YOLOv4 LPD and LPR) from the NGC link? Have you set the LPR model as SGIE?

Yes, I am using the pre-trained models (YOLOv4 LPD and LPR) from the NGC link, and I have set the LPR model as SGIE.

Please upload the nvinfer configuration files for both models.

Below are the config files I am using:

PGIE:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=1
tlt-model-key=nvidia_tlt
labelfile-path=./models/LP/LPD/usa_lpd_label.txt
tlt-encoded-model=./models/LP/LPD/yolov4_tiny_usa_deployable.etlt
model-engine-file=./models/LP/LPD/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp32.engine
int8-calib-file=./models/LP/LPD/yolov4_tiny_usa_cal.bin
uff-input-dims=3;480;640;0
uff-input-blob-name=input_1
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=1
##1 Primary 2 Secondary
process-mode=2
interval=0
gie-unique-id=2
#0 detector 1 classifier 2 segmentation 3 instance segmentation
network-type=0
operate-on-gie-id=1
operate-on-class-ids=0
#no cluster
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/deepstream-python/TAO/libnvds_infercustomparser_tao.so

input-object-min-height=30
input-object-min-width=40
#GPU:1 VIC:2(Jetson only)
#scaling-compute-hw=2
#enable-dla=1

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

SGIE:

[property]
gpu-id=0

model-engine-file=./models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b1_gpu0_fp16.engine
#model-engine-file=./models/LP/LPR/lpr_us_onnx_b16.engine
labelfile-path=./models/LP/LPR/us_lp_characters.txt
tlt-encoded-model=./models/LP/LPR/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt

batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=3
gie-unique-id=2
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=./nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process_mode=2
operate-on-gie-id=2
operate-on-class-ids=0
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.5

Below is the output:

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Starting pipeline

Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:06.254158542 31803 0x2d5e3070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/home/deepstream-lpr-python-version/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96
1 OUTPUT kINT32 tf_op_layer_ArgMax 24
2 OUTPUT kFLOAT tf_op_layer_Max 24

ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:06.255501971 31803 0x2d5e3070 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 2]: Could not find output layer ‘output_bbox/BiasAdd’ in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid
0:00:06.255573692 31803 0x2d5e3070 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 2]: Could not find output layer ‘output_cov/Sigmoid’ in engine
0:00:06.255600984 31803 0x2d5e3070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 2]: Use deserialized engine model: /home/deepstream-lpr-python-version/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b1_gpu0_fp16.engine
0:00:06.281809750 31803 0x2d5e3070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 2]: Load new model:lpr_config_sgie_us.txt sucessfully
0:00:06.962328419 31803 0x2d5e3070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/home/deepstream-lpr-python-version/models/LP/LPD/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x480x640
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200

0:00:06.963544395 31803 0x2d5e3070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 2]: Use deserialized engine model: /home/deepstream-lpr-python-version/models/LP/LPD/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp32.engine
0:00:06.970681293 31803 0x2d5e3070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 2]: Load new model:lpd_us_config.txt sucessfully
0:00:06.970833382 31803 0x2d5e3070 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:07.540823363 31803 0x2d5e3070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/deepstream-lpr-python-version/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:00:07.542014912 31803 0x2d5e3070 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/deepstream-lpr-python-version/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp32.engine
0:00:07.556511788 31803 0x2d5e3070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:trafficamnet_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Current OCR
H264: Profile = 66, Level = 0
Current OCR
NVMEDIA_ENC: bBlitMode is set to TRUE
Current OCR
Current OCR
Current OCR
Current OCR
Current OCR
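Note the two “Cannot find binding of given name” errors in the log above: the SGIE config lists output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid, while the deserialized LPR engine actually exposes tf_op_layer_ArgMax and tf_op_layer_Max. A quick way to cross-check a config against an engine is to list the engine’s bindings; a sketch using the TensorRT 8.x Python API (the engine path is copied from the log above):

import tensorrt as trt

# Deserialize the engine and print every binding with its direction and
# shape, to compare against output-blob-names in the nvinfer config.
logger = trt.Logger(trt.Logger.WARNING)
engine_path = "/home/deepstream-lpr-python-version/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b1_gpu0_fp16.engine"
with open(engine_path, "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
for i in range(engine.num_bindings):
    direction = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
    print(direction, engine.get_binding_name(i), engine.get_binding_shape(i))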

You can’t use both models as SGIEs. And they even have the same “gie-unique-id”. DeepStream can’t identify them.

The detector should be used as PGIE.

The classifier should be used as the SGIE, with its “operate-on-gie-id” set to the PGIE’s “gie-unique-id”.
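A minimal sketch of that arrangement in the Python bindings (element names and config file paths are placeholders); the essential point is that the SGIE’s operate-on-gie-id must equal the PGIE’s gie-unique-id:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

# The detector runs on full frames as the primary GIE.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "lpd_config.txt")
# lpd_config.txt: process-mode=1, gie-unique-id=1

# The classifier runs on the detector's objects as the secondary GIE.
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
sgie.set_property("config-file-path", "lpr_config.txt")
# lpr_config.txt: process-mode=2, gie-unique-id=2, operate-on-gie-id=1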

These are the updated files:

PGIE:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=77.5;21.2;11.8
model-color-format=0
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/home/deepstream-python/models/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp16.engine
labelfile-path=/home/deepstream-python/TAO/labels_lpdnet.txt
int8-calib-file=/home/deepstream-python/TAO/yolov4_tiny_usa_cal.bin
gie-unique-id=1
#operate-on-gie-id=1
#operate-on-class-ids=0
force-implicit-batch-dim=1
batch-size=1
process-mode=1
network-mode=1
num-detected-classes=1
interval=1
gie-unique-id=1
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/deepstream-python/TAO/libnvds_infercustomparser_tao.so
#scaling-compute-hw=0
maintain-aspect-ratio=0
cluster-mode=4
output-tensor-meta=0
network-type=0

[class-attrs-all]
pre-cluster-threshold=0.7
eps=0.7
#group-threshold=1

dbscan-min-score=0.7
nms-iou-threshold=0.7
topk=1

SGIE:

[property]
gpu-id=0
model-engine-file=/home/deepstream-python/models/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
labelfile-path=/home/deepstream-python/models/us_lp_characters.txt
tlt-encoded-model=/home/deepstream-python/models/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
batch-size=16

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
#num-detected-classes=3
gie-unique-id=2
uff-input-blob-name=image_input
output-blob-names=sequential_20/re_lu_15/Relu:0
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/home/deepstream-python/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0
#output-tensor-meta=true
[class-attrs-all]
threshold=0.5

Output:

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/test ***

Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:07.140409847 15986 0x24798b80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/home/deepstream-python/models/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96 min: 1x3x48x96 opt: 16x3x48x96 Max: 16x3x48x96
1 OUTPUT kINT32 tf_op_layer_ArgMax 24 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT tf_op_layer_Max 24 min: 0 opt: 0 Max: 0

ERROR: [TRT]: 3: Cannot find binding of given name: sequential_20/re_lu_15/Relu:0
0:00:07.141768693 15986 0x24798b80 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 2]: Could not find output layer ‘sequential_20/re_lu_15/Relu:0’ in engine
0:00:07.141814215 15986 0x24798b80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 2]: Use deserialized engine model: /home/deepstream-python/models/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
0:00:07.178835557 15986 0x24798b80 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 2]: Load new model:config/sgie_config_ResNet.txt sucessfully
0:00:07.807342098 15986 0x24798b80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/deepstream-python/models/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x480x640
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200

0:00:07.808493072 15986 0x24798b80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/deepstream-python/models/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp16.engine
0:00:07.828713460 15986 0x24798b80 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:config/yolo_v4_tiny_tao_pgie_config.txt sucessfully

NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4

H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE

The PGIE parameters are quite different from our sample lpd_yolov4-tiny_us.txt in the NVIDIA-AI-IOT/deepstream_lpr_app GitHub repository.

Your PGIE unique id is 1; why did you set the SGIE to work on unique id 2?

Please check your parameters by yourself. We already provided samples for PGIE+SGIE pipeline and samples for LPD&LPR models.

Thanks for your support, I will work on it.

I have updated the parameters but am getting a None value for l_class:

while l_obj is not None:
    try:
        # Casting l_obj.data to pyds.NvDsObjectMeta
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break

    if True:
        # no ROI
        l_class = obj_meta.classifier_meta_list   # l_class is None here
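One way to narrow this down is to print, for every object, which GIE produced it and whether any classifier meta was attached. A small diagnostic sketch that can be dropped into the loop above, right after the cast:

# Confirm the detector's objects are being iterated and whether the SGIE
# attached classifier meta to them.
print("obj from gie", obj_meta.unique_component_id,
      "class id", obj_meta.class_id,
      "classifier meta attached:", obj_meta.classifier_meta_list is not None)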

I am using the pre-trained US car number plate YOLOv4 model as the PGIE (detector) and the pre-trained US car number plate LPR model as the SGIE (classifier).

The LPD model is working; I am getting bounding boxes.

Seems it is not a California license plate.

Yes, I am using it for Indian number plates.

The model returns OCR values when using the TrafficCamNet model + the LPD (DetectNet_v2) and LPR models.
But currently I am using the pre-trained US car number plate YOLOv4 model as the PGIE (detector) and the pre-trained US car number plate LPR model as the SGIE (classifier).

Can you post the new configuration files?

The model is trained with US California car pictures. The LPD does not output good car plate bboxes on complicated pictures. This is why we involve the car detection model as the PGIE.

We conducted experiments using the same video file with two different combinations of models. The first combination involved the TrafficCamNet model for vehicle detection along with LPD (DetectNet_v2) for license plate detection. Although this combination provided OCR capabilities, the frame rate was relatively low, ranging from 5 to 7 frames per second.

In contrast, the second combination used LPD with YOLOv4 for license plate detection. This combination exhibited higher accuracy (at 20-25 fps) and did not miss any number plates, even on low-quality video inputs.
So currently we are trying to use LPD (YOLOv4) with the LPR model.

PGIE:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/home/deepstream-python/models/yolov4_tiny_usa_deployable.etlt_b1_gpu0_fp16.engine
labelfile-path=/home/deepstream-python/TAO/labels_lpdnet.txt
int8-calib-file=/home/deepstream-python/TAO/yolov4_tiny_usa_cal.bin
tlt-model-key=nvidia_tlt
infer-dims=3;480;640
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1

network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/deepstream-python/TAO/libnvds_infercustomparser_tao.so
[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

SGIE:

[property]
gpu-id=0
model-engine-file=/home/deepstream-python/models/lpr_us_onnx_b16.engine
labelfile-path=/home/deepstream-python/models/us_lp_characters.txt
tlt-encoded-model=/home/deepstream-python/models/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
batch-size=16
network-mode=2
num-detected-classes=3
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/home/deepstream-python/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.5
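Note that in these files the SGIE’s gie-unique-id is 1 (the same as the PGIE’s) while its operate-on-gie-id is 2, which no element in the pipeline has, echoing the earlier remark about the ids. A quick consistency check over both configs can catch this; a sketch with placeholder file names (strict=False tolerates duplicated keys):

import configparser

# Verify the id wiring DeepStream requires: distinct gie-unique-id values,
# and the SGIE's operate-on-gie-id equal to the PGIE's gie-unique-id.
pgie_cfg = configparser.ConfigParser(strict=False)
sgie_cfg = configparser.ConfigParser(strict=False)
pgie_cfg.read("pgie_config.txt")
sgie_cfg.read("sgie_config.txt")

pgie_id = pgie_cfg["property"]["gie-unique-id"]
sgie_id = sgie_cfg["property"]["gie-unique-id"]
print("PGIE id:", pgie_id, "| SGIE id:", sgie_id,
      "| SGIE operates on:", sgie_cfg["property"]["operate-on-gie-id"])
assert pgie_id != sgie_id, "PGIE and SGIE must use different gie-unique-id"
assert sgie_cfg["property"]["operate-on-gie-id"] == pgie_id, \
    "SGIE operate-on-gie-id must match the PGIE gie-unique-id"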

Such a bbox is preferred.

I don’t think the pre-trained LPR model is trained with Indian car plate fonts.