No output from DeepStream 7 AI-NVR app with SGIE running LPD model

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin NX 16GB
• DeepStream Version: 7.0
• JetPack Version: 6.0 (L4T 36.3)
I am trying to run the LPDNet model (from NVIDIA: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/lpdnet/files) on DeepStream 7, but I'm not getting any results.
Here are the config files that I'm using:

yolov8s_config_file_nx16.txt:

################################################################################
# Copyright (c) 2018-2024, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

#Note: [source-list] now supports REST Server with use-nvmultiurisrcbin=1
[source-list]
num-source-bins=0
#list=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4
use-nvmultiurisrcbin=1
#sensor-id-list vector is one to one mapped with the uri-list
#identifies each sensor by a unique ID
#sensor-id-list=UniqueSensorId1;UniqueSensorId2
max-batch-size=4
http-ip=localhost
http-port=9010
#sgie batch size is number of sources * fair fraction of number of objects detected per frame per source
#the fair fraction of number of object detected is assumed to be 4
sgie-batch-size=40
#set the below key to keep the application running at all times
stream-name-display=1

[source-attr-all]
enable=1
type=3
num-sources=1
gpu-id=0
cudadec-memtype=0
latency=100000
rtsp-reconnect-interval-sec=20

[streammux]
gpu-id=0
#Note: when used with [source-list], batch-size is ignored
#instead, max-batch-size config is used
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=30000

##Set muxer output width and height
width=960
height=544
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

## drop-pipeline-eos ignores EOS from individual streams muxed in the DS pipeline
## It is useful with source-list/use-nvmultiurisrcbin=1 where the REST server
## will be running post last stream EOS to accept new streams
drop-pipeline-eos=1
##Boolean property to inform muxer that sources are live
##When using nvmultiurisrcbin live-source=1 is preferred default
##to allow batching of available buffers when number of sources is < max-batch-size configuration
live-source=1
attach-sys-ts-as-ntp=0
buffer-pool-size=4

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
msg-broker-conn-str=redis;6379;test
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_redis_proto.so
msg-conv-msg2p-new-api=0
msg-conv-frame-interval=1
msg-broker-config=/ds-config-files/yolov8s/cfg_redis.txt
msg-conv-payload-type=1
#multiple-payloads=1
source-id=0
sync=0
type=6
topic=test

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

[sink3]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0

#set below properties in case of RTSPStreaming
rtsp-port=8555
udp-port=5511

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

#config-file property is mandatory for any gie section.
#Other properties are optional and if set will override the properties set in
#the infer config file.

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8_nx16.txt
model-engine-file=/yolov8s/model_b4_gpu0_int8.engine
batch-size=4
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0

[tracker]
enable=1
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_PNv2.6_Interval_1_PVA.yml;config_tracker_NvDCF_PNv2.6_Interval_1_PVA.yml
sub-batches=2:2
gpu-id=0
display-tracking-id=1

[secondary-gie0]
enable=1
model-engine-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx_b40_gpu0_int8.engine
gpu-id=0
batch-size=4
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt

[tests]
file-loop=1

config_infer_primary_yoloV8_nx16.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/yolov8s/yolov8s-dependencies/yolov8s.onnx
model-engine-file=/yolov8s/model_b4_gpu0_int8.engine
int8-calib-file=/yolov8s/calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/yolov8s-files/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.5
pre-cluster-threshold=0.25
topk=300

config_infer_secondary_vehicletypes.txt:
[property]
gpu-id=0
#model-color-format=0
net-scale-factor=0.0039215697906911373
tlt-model-key=nvidia_tlt
onnx-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx
model-engine-file=/yolov8s/model/LPDNet_usa_pruned_tao5.onnx_b40_gpu0_int8.engine
int8-calib-file=/yolov8s/model/usa_cal_8.6.1.bin
labelfile-path=/yolov8s/model/labels_lpdnet.txt
uff-input-dims=3;480;640;0
uff-input-blob-name=input_1
batch-size=16

#0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
##1 Primary 2 Secondary
process-mode=2
interval=0
gie-unique-id=2
#0 detector 1 classifier 2 segmentation 3 instance segmentation
network-type=0
operate-on-gie-id=1
operate-on-class-ids=0
cluster-mode=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
input-object-min-height=30
input-object-min-width=40
#enable-dla=1
#is-classifier=1

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

I don't know what I'm missing here. Also, my label.txt file contains only one class: lpd.

Moving this topic to the JPS (Jetson Platform Services) forum.

We have an LPD/LPR sample app here: GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream.
You can refer to this app.

I've followed this repo to deploy the SGIE, but now I want to run the entire app. Is there a Docker image for it? I can't find one in the repo.

We don't have a Docker image for the LPR app currently.

I was able to use the LPD model with deepstream-test5. Is it possible to use the LPR model with it, or do I have to deploy this app specifically?

Can you try to integrate the LPR model as an SGIE in deepstream-test5?
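For example, extending the test5 config above, the extra SGIE entry could look roughly like this sketch (the gie-unique-id value and the config-file name are illustrative; the LPR classifier config comes from the deepstream_lpr_app repo):

[secondary-gie1]
enable=1
gpu-id=0
batch-size=16
#assumed: operate on the plate objects produced by the LPD SGIE (gie-unique-id=4 above)
operate-on-gie-id=4
gie-unique-id=5
#illustrative name: the LPR classifier config shipped with deepstream_lpr_app
config-file=lpr_config_sgie_us.txt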

Actually, it works now. I have 3 models in my pipeline: yolov8s as the PGIE, and LPD & LPR as SGIEs. Now I'm trying to rebuild test5 so it saves the license plate results in Redis.

Yes, you need to customize the event message meta to add the needed metadata for the message broker.
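Loosely following the generate_event_msg_meta() pattern in deepstream_test5_app_main.c, a minimal sketch of that customization might look like the following. LPR_GIE_ID and the function name are assumptions, and meta_copy_func / meta_free_func are test5's existing callbacks for NVDS_EVENT_MSG_META:

/* Sketch only: attach the recognized plate string to the frame's event
 * message meta so the Redis broker (sink1) picks it up. Field names follow
 * nvdsmeta_schema.h. Intended to be called from a probe in the test5 app. */
#include <glib.h>
#include "nvdsmeta.h"
#include "nvdsmeta_schema.h"

#define LPR_GIE_ID 5 /* assumption: unique-id of the LPR classifier SGIE */

static void
attach_lpr_msg_meta (NvDsBatchMeta * batch_meta, NvDsFrameMeta * frame_meta,
    NvDsObjectMeta * obj_meta)
{
  NvDsMetaList *l, *ll;
  /* Walk the classifier results attached to this object by the LPR SGIE. */
  for (l = obj_meta->classifier_meta_list; l; l = l->next) {
    NvDsClassifierMeta *cmeta = (NvDsClassifierMeta *) l->data;
    if (cmeta->unique_component_id != LPR_GIE_ID)
      continue;
    for (ll = cmeta->label_info_list; ll; ll = ll->next) {
      NvDsLabelInfo *label = (NvDsLabelInfo *) ll->data;

      NvDsEventMsgMeta *msg = (NvDsEventMsgMeta *) g_malloc0 (sizeof (*msg));
      msg->type = NVDS_EVENT_MOVING;
      msg->objType = NVDS_OBJECT_TYPE_VEHICLE;
      msg->sensorId = frame_meta->source_id;
      msg->frameId = frame_meta->frame_num;
      msg->trackingId = obj_meta->object_id;

      /* Carry the plate text in the vehicle object's license field. */
      NvDsVehicleObject *veh = (NvDsVehicleObject *) g_malloc0 (sizeof (*veh));
      veh->license = g_strdup (label->result_label);
      msg->extMsg = veh;
      msg->extMsgSize = sizeof (NvDsVehicleObject);

      /* Hand the event meta to msgconv/msgbroker via frame user meta. */
      NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      user_meta->user_meta_data = msg;
      user_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
      /* Reuse test5's existing copy/free callbacks for this meta type. */
      user_meta->base_meta.copy_func = (NvDsMetaCopyFunc) meta_copy_func;
      user_meta->base_meta.release_func = (NvDsMetaReleaseFunc) meta_free_func;
      nvds_add_user_meta_to_frame (frame_meta, user_meta);
    }
  }
}

With msg-conv-frame-interval=1 already set in [sink1], the plate string should then end up in the per-object entry of the payload sent to Redis.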
