SGIE classifier not showing in Python

• Hardware Platform (Jetson Nano)
• DeepStream Version (6.0.1)

Hi, I have a problem with DeepStream LPDNet and LPRNet.
The PGIE LPDNet detector shows its detections and parameters, but the SGIE LPRNet output does not appear.

However, when I use deepstream_lpr_app, both are shown without any problem.

I also looked at the C code: must the PGIE be the car detector, the SGIE detector LPDNet, and the SGIE classifier LPRNet for it to work?


Here is the PGIE LPDNet config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
model-engine-file=lpdnet120.etlt_b4_gpu0_fp16.engine
labelfile-path=lpd_label_id.txt
#tlt-encoded-model=resnet18_detector.etlt
tlt-encoded-model=lpdnet120.etlt
tlt-model-key=nvidia_tlt
int8-calib-file=id_lpd_cal.bin
uff-input-dims=3;480;640;0
uff-input-blob-name=input_1
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
##1 Primary 2 Secondary
process-mode=1
interval=3
gie-unique-id=1
#0 detector 1 classifier 2 segmentation 3 instance segmentation
network-type=0
operate-on-gie-id=2
operate-on-class-ids=0
#no cluster
cluster-mode=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
input-object-min-height=30
input-object-min-width=40
#GPU:1  VIC:2(Jetson only)
scaling-compute-hw=2
#enable-dla=1

[class-attrs-all]
pre-cluster-threshold=0.25
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

And the SGIE LPRNet config:

[property]
gpu-id=0
model-engine-file=lprnet_deploy-40-ver3.etlt_b4_gpu0_fp16.engine
labelfile-path=dict.txt
tlt-encoded-model=lprnet_deploy-40-ver3.etlt
## tlt-encoded-model=resnet_lpd/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=35
gie-unique-id=3
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=../nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=1
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.15

Thank you.

What license plate is it, and for which country?

Did you add and link the SGIE in your pipeline?
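
For context, a minimal sketch of what creating and linking an SGIE looks like with the DeepStream Python bindings; pipeline, streammux, tracker and nvvidconv are assumed to exist already, and the config file names are the ones from this thread:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")

# Each nvinfer instance reads its own config file.
pgie.set_property("config-file-path", "lpd_id_config.txt")
sgie.set_property("config-file-path", "lpr_config_sgie_id.txt")

pipeline.add(pgie)
pipeline.add(sgie)

# The SGIE must sit downstream of the PGIE (and the tracker, if any):
# streammux -> pgie -> tracker -> sgie -> nvvidconv -> osd -> sink
streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie)
sgie.link(nvvidconv)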

Indonesian plates, and the model has already been trained using the NVIDIA TAO Toolkit.

Yes. The models should be used in this way.

Is it possible to remove the PGIE car detection and replace it with license plate detection?

Like this:
PGIE (LPD) -----> SGIE (LPR)

Not this:
PGIE (Car Detection) -----> SGIE 1 (LPD) -----> SGIE 2 (LPR)

That depends on your re-trained LPD model; we don't know anything about the LPD model you have re-trained. The pre-trained LPD model in LPDNet | NVIDIA NGC can only detect plates in car pictures (pictures containing a single car). If you have modified and re-trained the model to detect plates in pictures with arbitrary content (e.g. pictures with several cars and other background objects), then you don't need the car detection model.

The pipeline and usage are determined by the models' features.
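
If the re-trained LPDNet does detect plates on full frames, the simplified chain comes down to the roles and ID linkage of the two nvinfer instances. A minimal sketch with the Python bindings; the property values are illustrative and mirror the config keys process-mode, gie-unique-id and operate-on-gie-id:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

lpd = Gst.ElementFactory.make("nvinfer", "lpd-pgie")  # plate detector on full frames
lpr = Gst.ElementFactory.make("nvinfer", "lpr-sgie")  # plate reader on LPD's crops

lpd.set_property("process-mode", 1)     # 1 = primary, infer on the full frame
lpd.set_property("unique-id", 1)

lpr.set_property("process-mode", 2)     # 2 = secondary, infer on detected objects
lpr.set_property("unique-id", 2)
lpr.set_property("infer-on-gie-id", 1)  # consume objects produced by unique-id 1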

Just found out the problem: when I run deepstream-app -c deepstream_app_source1_trafficcamnet_lpd.txt, the SGIE shows, but the SGIE does not show in Python. This is the deepstream-app config:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file:///home/development/Downloads/deepstream_lpr_app/deepstream-lpr-app/plate1.mp4
gpu-id=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial

[primary-gie]
enable=1
#model-engine-file=resnet_lpd/resnet18_detector.etlt_b4_gpu0_fp16.engine
model-engine-file=resnet_lpd/lpdnet120.etlt_b4_gpu0_fp16.engine
#(0): nvinfer; (1): nvinferserver
plugin-type=0
gpu-id=0
# Modify as necessary
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=lpd_id_config.txt
#config-file=triton/config_infer_primary_trafficcamnet.txt
#config-file=triton-grpc/config_infer_primary_trafficcamnet.txt

[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=480
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1


[secondary-gie0]
enable=1
model-engine-file=resnet_lpd/lprnet_deploy-40-ver3.etlt_b4_gpu0_fp16.engine
#(0): nvinfer; (1): nvinferserver
plugin-type=0
gpu-id=0
batch-size=4
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=lpr_config_sgie_id.txt
#config-file=triton/config_infer_secondary_vehiclemakenet.txt
#config-file=triton-grpc/config_infer_secondary_vehiclemakenet.txt


[tests]
file-loop=1

In [primary-gie], gie-unique-id=1, and in [secondary-gie0], gie-unique-id=2.
Is it because the gie-unique-id values in “deepstream_app_source1_trafficcamnet_lpd.txt” are overriding those in “lpd_id_config.txt” and “lpr_config_sgie_id.txt”?

I tried changing the gie-unique-id values in “lpd_id_config.txt” and “lpr_config_sgie_id.txt” to match “deepstream_app_source1_trafficcamnet_lpd.txt” and ran it in Python, but this error comes up:

open dictionary file failed.
0:00:22.495235700  8116      0x882fa80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::fillClassificationOutput() <nvdsinfer_context_impl_output_parsing.cpp:804> [UID = 3]: Failed to parse classification attributes using custom parse function
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)
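
The first line, “open dictionary file failed.”, suggests the custom LPR parser could not load its character dictionary before parsing, which would explain the subsequent parse failure. In the deepstream_lpr_app sample the dictionary is a dict.txt in the directory the app is launched from (dict_us.txt is copied to dict.txt). A hypothetical pre-flight check under that assumption; the source path is an assumption to adjust for your re-trained model:

import os
import shutil

# Assumption: libnvdsinfer_custom_impl_lpr.so reads "dict.txt" from the
# current working directory, as set up in the deepstream_lpr_app sample.
DICT_SRC = "deepstream_lpr_app/dict_us.txt"  # hypothetical path to your dictionary

if not os.path.exists("dict.txt"):
    shutil.copy(DICT_SRC, "dict.txt")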

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Yes. The app’s config will override the elements’ configuration.
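
In a Python app there is no deepstream-app config layer doing that override, so the values in the nvinfer config files are what count. One way to keep them from drifting, assuming pgie and sgie are your two nvinfer elements, is to pin the IDs on the elements themselves (illustrative values):

pgie.set_property("unique-id", 1)
sgie.set_property("unique-id", 2)
sgie.set_property("infer-on-gie-id", 1)  # SGIE consumes objects from GIE 1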

There is a postprocessing failure. Please refer to NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream (github.com) for the correct configuration.
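
Once the configuration matches the sample, you can verify in Python that the SGIE labels actually arrive by walking the classifier metadata in a pad probe. A minimal sketch using pyds; names are illustrative, and osd is assumed to be an existing nvdsosd element:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Classifier (SGIE) results hang off each detected object.
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label = pyds.NvDsLabelInfo.cast(l_label.data)
                    # unique_component_id is the producing SGIE's gie-unique-id.
                    print("SGIE", cls_meta.unique_component_id,
                          "plate:", label.result_label)
                    l_label = l_label.next
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach to the OSD sink pad:
# osd.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER,
#                                      osd_sink_pad_probe, 0)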
