DeepStream LPD and LPR models not detecting

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson Orin NX)
• DeepStream Version 7.0
• JetPack Version 6.0 (L4T 36.3)
• TensorRT Version

I have this pipeline in DeepStream: yolov8s (PGIE) → LPD (SGIE0) → LPR (SGIE1).
The problem is that LPD and LPR only detect the license plate when the vehicle is large enough in the frame.


In the screenshot above, the license plate is not detected at all, while in the screenshot below it is detected once the vehicle gets very close:

Here are my configs:
LPD:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
labelfile-path=/yolov8s/yolov4/usa_lpd_label.txt
model-engine-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx_b40_gpu0_int8.engine
onnx-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx
int8-calib-file=/yolov8s/model/usa_cal_8.5.3.bin
#tlt-encoded-model=/yolov8s/
tlt-model-key=nvidia_tlt
infer-dims=3;480;640
#maintain-aspect-ratio=1
uff-input-dims=3;480;640;0
uff-input-order=0
uff-input-blob-name=input_1
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
gie-unique-id=1
network-type=0
operate-on-gie-id=1
#operate-on-class-ids=0
#cluster-mode=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
input-object-min-height=30
input-object-min-width=40
classifier-async-mode=1
process-mode=2
is-classifier=1

[class-attrs-all]
pre-cluster-threshold=0.01
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
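One thing worth noting about the two gates above: input-object-min-width / input-object-min-height are checked against the detected object's bounding box, which (as I understand it) is in the muxed-frame coordinates set by the [streammux] group in the full config below (960x544). A back-of-envelope sketch, using hypothetical vehicle and camera sizes that are not stated in the thread:

```python
# Back-of-envelope sketch (hypothetical numbers): how streammux downscaling
# interacts with the LPD gates input-object-min-width=40 /
# input-object-min-height=30 from the config above.
SRC_W, SRC_H = 1920, 1080   # assumed camera resolution (not stated in the thread)
MUX_W, MUX_H = 960, 544     # [streammux] width/height from the full app config

def muxed_size(w, h):
    """Size of a source-frame bounding box after streammux rescales the frame."""
    return w * MUX_W / SRC_W, h * MUX_H / SRC_H

# A distant vehicle that is 70x50 px in the source frame:
w, h = muxed_size(70, 50)          # -> (35.0, ~25.2)
feeds_lpd = w >= 40 and h >= 30    # False: the gate skips this vehicle entirely
```

So with a high-resolution source, objects shrink by the mux scale factor before these gates apply; if the thresholds were chosen with source-resolution pixels in mind, they need to account for that scaling.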

LPR:


[property]
gpu-id=0
model-engine-file=/yolov8s/lpr-model/us_lprnet_baseline18_deployable.etlt_b40_gpu0_fp16.engine
labelfile-path=/yolov8s/lpr-model/labels_us.txt
#custom-parse-dictionary-file=/yolov8s/lpr-model/labels_us.txt
tlt-encoded-model=/yolov8s/lpr-model/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
gie-unique-id=3
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/ds-config-files/yolov8s/deepstream_lpr_app/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.65
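As an aside, the net-scale-factor literals in the two SGIE configs are both just decimal renderings of 1/255, i.e. the standard normalization of 8-bit pixel values into [0, 1]. A quick sanity check, with the values copied from the configs above:

```python
# net-scale-factor in the LPD/LPR configs is the standard 1/255 pixel
# normalization (8-bit intensities mapped into [0, 1]).
LPD = 0.0039215697906911373   # value from the LPD config
LPR = 0.00392156862745098     # value from the LPR config
EXACT = 1 / 255

# Both literals agree with 1/255 to far below one 8-bit quantization step.
assert abs(LPD - EXACT) < 1e-8
assert abs(LPR - EXACT) < 1e-15
```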

Full config:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=4
square-seq-grid=1

[source-list]
num-source-bins=0
use-nvmultiurisrcbin=1
max-batch-size=4
http-ip=localhost
http-port=9010
sgie-batch-size=40
stream-name-display=1

[source-attr-all]
enable=1
type=3
num-sources=1
gpu-id=0
cudadec-memtype=0
latency=100
rtsp-reconnect-attempts=3
rtsp-reconnect-interval-sec=60
select-rtp-protocol=4

[streammux]
gpu-id=0
batch-size=4
batched-push-timeout=30000
width=960
height=544
enable-padding=0
nvbuf-memory-type=4
drop-pipeline-eos=1
live-source=1
attach-sys-ts-as-ntp=0
buffer-pool-size=4


[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=4

[sink1]
enable=1
msg-broker-conn-str=localhost;6379;test
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_redis_proto.so
msg-conv-msg2p-new-api=0
msg-conv-frame-interval=1
msg-broker-config=/ds-config-files/yolov8s/cfg_redis.txt
msg-conv-payload-type=1
source-id=0
sync=0
type=6
topic=test

[sink2]
enable=0
type=3
container=1
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

[sink3]
enable=1
type=4
codec=1
enc-type=0
sync=0
bitrate=4000000
profile=0
rtsp-port=8555
udp-port=5511

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=4

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=4
config-file=config_infer_primary_yoloV8_nx16.txt
model-engine-file=/yolov8s/model_b4_gpu0_int8.engine
batch-size=4
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0

[tracker]
enable=1
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_PNv2.6_Interval_1_PVA.yml;config_tracker_NvDCF_PNv2.6_Interval_1_PVA.yml
sub-batches=2:2
gpu-id=0
display-tracking-id=1

### LPD model
[secondary-gie0]
enable=1
#model-engine-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx_b40_gpu0_int8.engine
gpu-id=0
batch-size=4
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=2;3;5;7
#operate-on-class-ids=0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16;17;18;19
config-file=lpd_yolov4-tiny_us.txt
#config-file=test.txt
#classifier-async-mode=1

# LPR model
[secondary-gie1]
enable=1
gpu-id=0
batch-size=4
gie-unique-id=5
operate-on-gie-id=4
operate-on-class-ids=0
config-file=lpr_config_sgie_us.txt

### VehicleMakeNet Model
[secondary-gie2]
enable=1
gpu-id=0
batch-size=4
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=2;3;5;7
config-file=config_infer_secondary_vehicleMake.txt


# VehicleTypeNet Model
[secondary-gie3]
enable=1
#model-engine-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx_b40_gpu0_int8.engine
gpu-id=0
batch-size=4
gie-unique-id=7
operate-on-gie-id=1
operate-on-class-ids=2;3;5;7
#config-file=vehicle_typenet_config_sgie.txt
config-file=config_infer_secondary_vehicleType.txt

# VehicleColorNet Model
[secondary-gie4]
enable=1
gpu-id=0
batch-size=4
gie-unique-id=8
operate-on-gie-id=1
operate-on-class-ids=2;3;5;7
config-file=config_infer_secondary_vehicleColor.txt


[tracker2]
enable=1
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_PNv2.6_Interval_1_PVA.yml;config_tracker_NvDCF_PNv2.6_Interval_1_PVA.yml
sub-batches=2:2
gpu-id=0
display-tracking-id=1


[tests]
file-loop=1

Let’s narrow this down first, since you are using many models. Could you run our deepstream_lpr_app sample on its own to test the LPR result?
We also recommend upgrading DeepStream to our latest version, 7.1.

I’m running DeepStream 7.0 using the Docker image; I can’t find any Docker image for the deepstream_lpr_app.

Does DeepStream 7.1 require the new versions of SDR, eMDX…?
I ask because when I upgraded to the new versions of VST, SDR, and eMDX, I had problems with the ROI/FOV/tripwire counts: the new SDR and eMDX versions were saving the latest records a few minutes after their actual time. That was a problem, because I need them in real time.

Yes. We recommend you upgrade JetPack to version 6.1 directly. Then you can use the DeepStream 7.1 Docker image and the latest deepstream_lpr_app sample.

OK. Other than that, could you check my config files? Is there anything I’m doing wrong in them?

Sure. Could you also attach the config file of your PGIE?

Sure, here it is:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/yolov8s/yolov8s-dependencies/yolov8s.onnx
model-engine-file=/yolov8s/model_b4_gpu0_int8.engine
int8-calib-file=/yolov8s/calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/yolov8s-files/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.5
pre-cluster-threshold=0.25
topk=300

From your configuration files, there is an error in the LPD section: your LPD config file sets gie-unique-id=1, which is already taken by the PGIE. You should set gie-unique-id=2 in the LPD config file.

But I’m overriding it in the app config file, where I set it to 4. Does the value in the model’s own config file still have an effect?

This value cannot be configured arbitrarily: it is the tag by which an SGIE recognizes whose output it should operate on. If you set the LPD’s gie-unique-id to 4, then you must set operate-on-gie-id=4 in the SGIE that consumes the LPD’s output.
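To illustrate the chaining, here is a minimal sketch using the ID values from the app config above (only the relevant keys are shown):

```ini
# PGIE (yolov8s detector)
[primary-gie]
gie-unique-id=1

# LPD: operates on the PGIE's vehicle detections
[secondary-gie0]
gie-unique-id=4
operate-on-gie-id=1     ; must equal the PGIE's gie-unique-id

# LPR: operates on the LPD's plate detections
[secondary-gie1]
gie-unique-id=5
operate-on-gie-id=4     ; must equal the LPD's gie-unique-id
```

Assuming the app-level value takes effect (as in this setup, where the LPD runs with ID 4), the important thing is that every operate-on-gie-id points at the effective gie-unique-id of its upstream stage.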