DeepStream LPR and LPD SGIEs give incorrect results

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson Orin NX 16)
• DeepStream Version 7
• JetPack Version 6 (L4T 36.3)

I have a pipeline with YOLOv8s as the PGIE and LPDNet and LPRNet as SGIEs, but I'm getting wrong or empty results. The LPD bounding boxes aren't correct, and the detected license plates are either empty or only partially read; the detections are wrong most of the time.
I'm using the deepstream-test5 app.
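(For reference, the three nvinfer configs below are wired up through the test5 app config's [primary-gie] / [secondary-gie0] / [secondary-gie1] groups, along these lines; the group contents and file names here are just a sketch, not my exact app config.)

[primary-gie]
enable=1
config-file=config_infer_primary_yolov8s.txt

[secondary-gie0]
enable=1
config-file=config_infer_secondary_lpd.txt

[secondary-gie1]
enable=1
config-file=config_infer_secondary_lpr.txt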
Here are my config files:

PGIE: yolov8s model

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/yolov8s/yolov8s-dependencies/yolov8s.onnx
model-engine-file=/yolov8s/model_b4_gpu0_int8.engine
int8-calib-file=/yolov8s/calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/yolov8s-files/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.5
pre-cluster-threshold=0.25
topk=300
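For reference, gst-nvinfer preprocesses each input pixel as y = net-scale-factor * (x - offset), so the net-scale-factor of ~0.00392 (1/255) above maps pixel values into [0, 1] for the YOLO model, while the LPD config below keeps net-scale-factor=1.0 and only subtracts the per-channel offsets.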

SGIE0: LPD model

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/yolov8s/yolov4/usa_lpd_label.txt
model-engine-file=/yolov8s/yolov4/yolov4_tiny_usa_deployable.etlt_b40_gpu0_int8.engine
int8-calib-file=/yolov8s/yolov4/yolov4_tiny_usa_cal.bin
tlt-encoded-model=/yolov8s/yolov4/yolov4_tiny_usa_deployable.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;480;640

uff-input-blob-name=Input
batch-size=16
process-mode=2
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
is-classifier=0
network-type=0
operate-on-gie-id=1
#operate-on-class-ids=0
cluster-mode=3
output-blob-names=BatchedNMS
input-object-min-height=30
input-object-min-width=40
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so

[class-attrs-all]
pre-cluster-threshold=0.45
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
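(Side note on the commented-out operate-on-class-ids line in the LPD config above: with an 80-class COCO-style PGIE, class 0 is "person", so restricting the LPD SGIE to vehicle crops would need the vehicle class indices instead. A sketch, assuming the standard COCO label order where 2=car, 5=bus, 7=truck:

operate-on-gie-id=1
operate-on-class-ids=2;5;7
)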

SGIE1: LPR model

[property]
gpu-id=0
model-engine-file=/yolov8s/lpr-model/us_lprnet_baseline18_deployable.etlt_b40_gpu0_fp16.engine
labelfile-path=/yolov8s/lpr-model/labels_us.txt
#custom-parse-dictionary-file=/yolov8s/lpr-model/labels_us.txt
tlt-encoded-model=/yolov8s/lpr-model/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
gie-unique-id=3
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/ds-config-files/yolov8s/deepstream_lpr_app/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.5

The LPD and LPR models are just pre-trained models that were trained on US (California) car license plates. Are you testing with pictures of US California car license plates?

It is not just that the results are incorrect; the LPD bounding boxes are also offset, as in the screenshot below. Sorry for the low resolution: I don't have the stream and the model on my host, but they are on the same machine, so the stream passed to the model has 1280x720 resolution.

[screenshot: offset LPD bounding boxes]

Is there anything to change in the config to fix this?

Please refer to NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream
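In that sample the three models are chained through gie-unique-id / operate-on-gie-id in their nvinfer configs roughly as follows (illustrative values; each SGIE's operate-on-gie-id has to match the gie-unique-id of the stage whose objects it consumes):

# PGIE (vehicle detector)
gie-unique-id=1
# SGIE0 (LPD), operates on the PGIE's objects
gie-unique-id=2
operate-on-gie-id=1
# SGIE1 (LPR), operates on the LPD's plate objects
gie-unique-id=3
operate-on-gie-id=2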