Trained LPD model does not work in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I used detectnet_v2 to train an LPD model on my own data. When I verify the model in TAO, inference works, such as:


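For reference, that TAO verification step can be sketched as below; the spec file and image paths are hypothetical placeholders (only the key `tlt_encode` appears in the config that follows):

```shell
# Sketch of a detectnet_v2 verification run in TAO; the spec file and
# directories are placeholders, not the poster's actual files.
tao detectnet_v2 inference \
    -e /workspace/specs/lpd_inference_spec.txt \
    -i /workspace/test_images \
    -o /workspace/inference_output \
    -k tlt_encode
```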
But this model did not work when I deployed it in DeepStream. My LPD config is:

[property]
#gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
labelfile-path=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_PlateDetc/ccpd_label.txt
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_PlateDetc/ccpd_pruned.etlt
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_PlateDetc/ccpd_pruned.etlt_b4_gpu0_int8.engine
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_PlateDetc/ccpd_cal.bin
tlt-model-key=tlt_encode
infer-dims=3;384;1248
#uff-input-dims=3;384;1248;0
uff-input-blob-name=input_1
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
##1 Primary 2 Secondary
process-mode=2
interval=0
# 0: detector 1: classifier 2: segmentation 3: instance segmentation
network-type=0
## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS hybrid, 4=None (no clustering)
cluster-mode=3
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
# min input object size by source resolution: 4096x2160 -> 450x350, 2560x1440 -> 250x150
input-object-min-width=250
input-object-min-height=150
#GPU:1  VIC:2(Jetson only)
scaling-compute-hw=1
#enable-dla=1

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=66
detected-min-h=22
detected-max-w=0
detected-max-h=0
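For reference, the INT8 engine named in `model-engine-file` would typically be built from the `.etlt` with `tao-converter` along these lines; the key, input dims, calibration file, and output blob names are taken from the config above, while the converter binary location and working directory are assumptions:

```shell
# Sketch: build the TensorRT engine from the .etlt with tao-converter.
# Flags: -k key, -d input dims (CHW), -o output blobs, -c INT8
# calibration cache, -t precision, -m max batch size, -e engine path.
./tao-converter \
    -k tlt_encode \
    -d 3,384,1248 \
    -o output_bbox/BiasAdd,output_cov/Sigmoid \
    -c ccpd_cal.bin \
    -t int8 \
    -m 4 \
    -e ccpd_pruned.etlt_b4_gpu0_int8.engine \
    ccpd_pruned.etlt
```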

Please help me, thank you.

Does “not work” mean it cannot detect the plate at all? Please share more information about your DeepStream config and pipeline, thanks.

Yes, it cannot detect plates.
The DeepStream pipeline is:
input: two RTSP streams, resolution: 4096x2160
streammux: 2560x1440
primary detector: detects cars, persons, etc.

The DeepStream main config is:

[application]
enable-perf-measurement = 1
perf-measurement-interval-sec = 5

[tiled-display]
enable = 0
rows = 2
columns = 2
width = 2560
height = 1440
gpu-id = 0
nvbuf-memory-type = 0

[source0]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream9
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 0
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source1]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream10
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 1
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[streammux]
live-source = 1
buffer-pool-size = 4
batch-size = 2
batched-push-timeout = 40000
width = 2560
height = 1440
compute-hw = 1
enable-padding = 1
nvbuf-memory-type = 0
attach-sys-ts-as-ntp = 0

[primary-gie]
enable = 1
config-file = pgie_yolo_cfg.txt
batch-size = 2
gie-unique-id = 1
interval = 0
input-tensor-meta = 0
bbox-border-color0 = 0;1;0;1
bbox-border-color1 = 0;1;1;1
bbox-border-color2 = 0;0;1;1
bbox-border-color3 = 1;0;1;1
bbox-border-color4 = 1;0;0;1
bbox-border-color5 = 1;0;0;1
bbox-border-color6 = 1;0;0;1
bbox-border-color7 = 0;1;0;1
bbox-border-color8 = 1;0;0;1
bbox-border-color9 = 0;1;0;1
bbox-border-color10 = 1;1;0;1

The primary detector config is:


[property]
num-detected-classes=80
net-scale-factor=0.0039215697906911373
#batch-size=4
#tensor-meta-pool-size=8
model-engine-file=/opt/nvidia/deepstream/ds-app/ds-engine/vehicle.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/labels_rgl.txt
gie-unique-id=1
#interval=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
output-blob-names=prob
parse-bbox-func-name=NvDsInferParseCustomYoloV5
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvdsinfer_custom_impl_Yolo_traffic_parti.so
# Integer 0: RGB 1: BGR 2: GRAY
model-color-format=0
# Integer 1=Primary 2=Secondary
process-mode=1
#classifier-threshold=0.4
# Integer  0: Detector  1: Classifier  2: Segmentation  3: Instance Segmentation
network-type=0
maintain-aspect-ratio=1
#symmetric-padding=1
force-implicit-batch-dim=1
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
#cluster-mode=2
#scaling-filter=0
# Integer 0: Platform default – GPU (dGPU), VIC (Jetson) 1: GPU 2: VIC (Jetson only)
scaling-compute-hw=1
# Integer 0:NCHW 1:NHWC
#network-input-order=1

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.35
pre-cluster-threshold=0.35

## Per class configurations
[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

#[class-attrs-2]
#pre-cluster-threshold=0.1
#eps=0.6
#dbscan-min-score=0.95

#[class-attrs-3]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

[class-attrs-7]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

Thank you very much.

Could you give some advice?

Hi @huihui308, in your DeepStream main config I cannot see the OSD plugin. The OSD plugin is what draws the bounding boxes. Could you try adding it to your config file? Thanks

I forgot to paste the OSD config:

[application]
enable-perf-measurement = 1
perf-measurement-interval-sec = 5

[tiled-display]
enable = 0
rows = 2
columns = 2
width = 2560
height = 1440
gpu-id = 0
nvbuf-memory-type = 0

[source0]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream9
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 0
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source1]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream10
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 1
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[streammux]
live-source = 1
buffer-pool-size = 4
batch-size = 2
batched-push-timeout = 40000
width = 2560
height = 1440
compute-hw = 1
enable-padding = 1
nvbuf-memory-type = 0
attach-sys-ts-as-ntp = 0

[primary-gie]
enable = 1
config-file = pgie_yolo_cfg.txt
batch-size = 2
gie-unique-id = 1
interval = 0
input-tensor-meta = 0
bbox-border-color0 = 0;1;0;1
bbox-border-color1 = 0;1;1;1
bbox-border-color2 = 0;0;1;1
bbox-border-color3 = 1;0;1;1
bbox-border-color4 = 1;0;0;1
bbox-border-color5 = 1;0;0;1
bbox-border-color6 = 1;0;0;1
bbox-border-color7 = 0;1;0;1
bbox-border-color8 = 1;0;0;1
bbox-border-color9 = 0;1;0;1
bbox-border-color10 = 1;1;0;1


[osd]
enable = 1
border-width = 2
text-size = 20
text-color = 1;1;1;1;
text-bg-color = 0.3;0.3;0.3;1
show-clock = 1
clock-text-size = 32
clock-x-offset = 60
clock-y-offset = 70
font = Serif
clock-color = 1;0;0;0
display-text = 1
display-bbox = 1
display-mask = 0

The cars do get bounding boxes.

Your config file has no SGIE. You can refer to this for a correct config. Thanks:
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app

I am very sorry, I forgot to paste it.

[application]
enable-perf-measurement = 1
perf-measurement-interval-sec = 5

[tiled-display]
enable = 0
rows = 2
columns = 2
width = 2560
height = 1440
gpu-id = 0
nvbuf-memory-type = 0

[source0]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream9
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 0
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source1]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream10
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 1
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[streammux]
live-source = 1
buffer-pool-size = 4
batch-size = 2
batched-push-timeout = 40000
width = 2560
height = 1440
compute-hw = 1
enable-padding = 1
nvbuf-memory-type = 0
attach-sys-ts-as-ntp = 0

[primary-gie]
enable = 1
config-file = pgie_yolo_cfg.txt
batch-size = 2
gie-unique-id = 1
interval = 0
input-tensor-meta = 0
bbox-border-color0 = 0;1;0;1
bbox-border-color1 = 0;1;1;1
bbox-border-color2 = 0;0;1;1
bbox-border-color3 = 1;0;1;1
bbox-border-color4 = 1;0;0;1
bbox-border-color5 = 1;0;0;1
bbox-border-color6 = 1;0;0;1
bbox-border-color7 = 0;1;0;1
bbox-border-color8 = 1;0;0;1
bbox-border-color9 = 0;1;0;1
bbox-border-color10 = 1;1;0;1

[secondary-gie0]
enable = 1
config-file = sgie0_lpd_cfg.txt
gie-unique-id = 2
operate-on-gie-id = 1
operate-on-class-ids = 4;5;6
gpu-id=0
batch-size=16
model-engine-file = /opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_PlateDetc/ccpd_pruned.etlt_b4_gpu0_int8.engine

[osd]
enable = 1
border-width = 2
text-size = 20
text-color = 1;1;1;1;
text-bg-color = 0.3;0.3;0.3;1
show-clock = 1
clock-text-size = 32
clock-x-offset = 60
clock-y-offset = 70
font = Serif
clock-color = 1;0;0;0
display-text = 1
display-bbox = 1
display-mask = 0

When I use the CCPD pruned 1.0 model from ‘LPDNet | NVIDIA NGC’, LPD works well, but when I use my own trained model, it does not work. Yet this model runs inference well in TAO.

Can anyone help? Thank you.

Hi @huihui308, how do you run your model in TAO?
Also, you can try the following first:
1. Run the demo below in your env and see if it works well:
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app
2. Use the model from that demo and see if it works well in DeepStream.

In TAO, after I finished training, I could run inference on test images with the trained model, and LPD works well.
As I said: when I use the CCPD pruned 1.0 model from ‘LPDNet | NVIDIA NGC’, LPD works well, but when I use my own trained model, it does not work, even though it infers well in TAO.

Hi @huihui308, yes; since your model was trained with TAO, it should work well in DeepStream.
I think the problem is that you are not very familiar with DeepStream. You can read the guide below to learn how to use the source, streammux, infer, tracker, PGIE, and SGIE components to do a test. Thanks
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html
Also, you can refer to the config file our code provides, and just change the sources, config file paths, and model to your own:

/opt/nvidia/deepstream/deepstream-X.X/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
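Running that reference config can be sketched as below; once it works, swap the sources and model paths for your own (the directory assumes a default DeepStream install, with X.X replaced by your version, e.g. 6.0):

```shell
# Run the reference deepstream-app sample config (default install path
# for DeepStream 6.0 assumed).
cd /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
```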

Thank you very much.
My DeepStream pipeline works well when I use the ‘LPDNet | NVIDIA NGC’ LPD model, but when I replace only the .etlt and int8-calib-file, LPD does not work.
Yet this .etlt and int8-calib-file work well in TAO; that is my question.
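One thing worth checking in this situation (an assumption, not something confirmed in this thread): when `model-engine-file` already points to an existing engine, nvinfer loads that cached engine and does not rebuild it from a newly swapped-in `.etlt`, so the old model keeps running. Deleting the stale engine forces a rebuild on the next run:

```shell
# Remove the cached engine so nvinfer regenerates it from the new .etlt
# and calibration file (path taken from the LPD config above).
rm -f /opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_PlateDetc/ccpd_pruned.etlt_b4_gpu0_int8.engine
```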

Did you write the config file following the License Plate Detection (LPDNet) Model Card guide? I don’t see a tracker in your config file.
Alternatively, can you attach all your files (config files, our model that works well, your model that does not) and all the commands you used?

OK, I think the tracker is not necessary for LPD.
My DeepStream main config is:

[application]
enable-perf-measurement = 1
perf-measurement-interval-sec = 5

[tiled-display]
enable = 0
rows = 2
columns = 2
width = 2560
height = 1440
gpu-id = 0
nvbuf-memory-type = 0

[source0]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream9
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 0
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source1]
enable = 1
type = 4
uri = rtsp://192.168.170.63:8554/mystream10
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 1
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source2]
enable = 0
type = 4
uri = rtsp://192.168.170.63:8554/mystream11
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 2
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source3]
enable = 0
type = 4
uri = rtsp://192.168.170.63:8554/mystream12
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 3
drop-frame-interval = 3
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source4]
enable = 0
type = 4
uri = rtsp://192.168.170.63:8554/mystream204
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 4
#drop-frame-interval = 2
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source5]
enable = 0
type = 4
uri = rtsp://192.168.170.63:8554/mystream205
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 5
#drop-frame-interval = 2
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source6]
enable = 0
type = 4
uri = rtsp://192.168.170.63:8554/mystream206
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 6
#drop-frame-interval = 2
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[source7]
enable = 0
type = 4
uri = rtsp://192.168.170.63:8554/mystream201
intra-decode-enable = 0
num-extra-surfaces = 4
camera-id = 7
#drop-frame-interval = 2
select-rtp-protocol = 4
rtsp-reconnect-interval-sec = 10
rtsp-reconnect-attempts = 10

[streammux]
live-source = 1
buffer-pool-size = 4
batch-size = 2
batched-push-timeout = 40000
width = 2560
height = 1440
compute-hw = 1
enable-padding = 1
nvbuf-memory-type = 0
attach-sys-ts-as-ntp = 0

[pre-process]
enable = 0
config-file = preprocess_cfg.txt

[primary-gie]
enable = 1
config-file = pgie_yolo_cfg.txt
batch-size = 2
gie-unique-id = 1
interval = 0
input-tensor-meta = 0
bbox-border-color0 = 0;1;0;1
bbox-border-color1 = 0;1;1;1
bbox-border-color2 = 0;0;1;1
bbox-border-color3 = 1;0;1;1
bbox-border-color4 = 1;0;0;1
bbox-border-color5 = 1;0;0;1
bbox-border-color6 = 1;0;0;1
bbox-border-color7 = 0;1;0;1
bbox-border-color8 = 1;0;0;1
bbox-border-color9 = 0;1;0;1
bbox-border-color10 = 1;1;0;1

[tracker]
enable = 0
tracker-width = 960
tracker-height = 544
ll-config-file = config_tracker_NvDCF_max_perf.yml
ll-lib-file = /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
enable-batch-process = 1
enable-past-frame = 1
display-tracking-id = 1
tracking-id-reset-mode = 3

[nvds-analytics]
enable = 0
config-file = config_nvdsanalytics.txt

[secondary-gie0]
enable = 1
config-file = sgie0_lpd_cfg.txt
gie-unique-id = 2
operate-on-gie-id = 1
operate-on-class-ids = 4;5;6

[secondary-gie1]
enable = 0
config-file = sgie1_lpr_cfg.txt
gie-unique-id = 3
operate-on-gie-id = 2
operate-on-class-ids = 0

[secondary-gie2]
enable = 0
config-file = sgie2_carcolor_cfg.txt
gie-unique-id = 4
operate-on-gie-id = 1
operate-on-class-ids = 4;5;6

[secondary-gie3]
enable = 0
config-file = sgie3_carmake_cfg.txt
gie-unique-id = 5
operate-on-gie-id = 1
operate-on-class-ids = 4;5;6

[secondary-gie4]
enable = 0
config-file = sgie4_vehicletypes_cfg.txt
gie-unique-id = 6
operate-on-gie-id = 1
operate-on-class-ids = 4;5;6

[osd]
enable = 1
border-width = 2
text-size = 20
text-color = 1;1;1;1;
text-bg-color = 0.3;0.3;0.3;1
show-clock = 1
clock-text-size = 32
clock-x-offset = 60
clock-y-offset = 70
font = Serif
clock-color = 1;0;0;0
display-text = 1
display-bbox = 1
display-mask = 0

[sink0]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 0
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-0
width = 1920
height = 1080

[sink1]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 1
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-1
width = 1920
height = 1080

[sink2]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 2
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-2
width = 1920
height = 1080

[sink3]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 3
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-3
width = 1920
height = 1080

[sink4]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 4
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-4
width = 1920
height = 1080

[sink5]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 5
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-5
width = 1920
height = 1080

[sink6]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 6
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-6
width = 1920
height = 1080

[sink7]
enable = 1
type = 8
qos = 0
codec = 1
source-id = 7
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
link-to-demux = 0
output-file = rtmp://localhost/live/stream-7
width = 1920
height = 1080

[sink8]
enable = 1
type = 7
qos = 0
source-id = 0
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9020
topic = data_test6
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink9]
enable = 1
type = 7
qos = 0
source-id = 1
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9021
topic = data_test7
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink10]
enable = 1
type = 7
qos = 0
source-id = 2
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9022
topic = data_test8
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink11]
enable = 1
type = 7
qos = 0
source-id = 3
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9023
topic = data_test9
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink12]
enable = 1
type = 7
qos = 0
source-id = 4
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9024
topic = data_test10
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink13]
enable = 1
type = 7
qos = 0
source-id = 5
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9025
topic = data_test11
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink14]
enable = 1
type = 7
qos = 0
source-id = 6
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9026
topic = data_test12
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink15]
enable = 1
type = 7
qos = 0
source-id = 7
enc-type = 0
profile = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-6.0/lib/libqf_nnmsg_proto.so
msg-broker-conn-str = tcp://0.0.0.0:9027
topic = data_test13
msg-conv-payload-type = 0
msg-conv-msg2p-new-api = 1
msg-conv-frame-interval = 1
nvbuf-memory-type = 0

[sink16]
enable = 0
type = 3
qos = 0
container = 1
codec = 1
enc-type = 0
sync = 0
bitrate = 15000000
profile = 0
output-file = out.mp4
source-id = 1
width = 1920
height = 1080

[tests]
file-loop = 0

[debug]
gst-debug = 3

OK, so what you mean is: you can run “deepstream-app -c this_configfile” with our reference model, but when you change only our model to your own, it does not work? Are you sure you changed the model only?
Since I don’t have your RTSP stream sources, model, GIE config files, or tracker config files, I cannot run your demo.
You can try the following methods yourself:
1. Run a simple demo first, with one source and one sink.
2. I still recommend running our LPR demo below to get familiar with DeepStream, and then using your own model.
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app
Thanks

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.