Two back-to-back PGIEs not working using config txt

In the stream config I have two [primary-gie] sections, each pointing to a different config-file. But deepstream-app only loads the second one and ignores the first.

@abrar.shahriar

Multiple [primary-gie] sections are not allowed, but you can configure multiple [secondary-gie] sections.
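A minimal sketch of that layout (the section names and the `operate-on-gie-id` chaining follow the deepstream-app convention; the config-file names here are placeholders, not your actual files):

```ini
[primary-gie]
enable=1
gie-unique-id=1
batch-size=1
config-file=pgie_config.txt

# A "second PGIE" is expressed as a secondary GIE that operates
# on the output of the GIE with gie-unique-id=1.
[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
batch-size=1
config-file=sgie_config.txt
```

Note that a [secondary-gie] normally runs on detected objects; if you need it to run on the full frame, set process-mode=1 in its infer config file.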

I am using a custom parser inside the dsexample GStreamer plugin for the PGIE. I am also using a full-frame SGIE (YOLO), but I cannot see the output of the YOLO SGIE.

@abrar.shahriar

I need more detailed information. Could you please show me your working directories and all DeepStream configuration text files?

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=512
height=512

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=2 
#uri=file:/workspace/DeepStream.X11-unix/TLT_DEMO/ds_testimage512.mp4
#uri=file:/workspace/DeepStream.X11-unix/TLT_DEMO/ds_config/ds_vid.mp4
uri=file:///workspace/DeepStream.X11-unix/TLT_DEMO/MyVid.11.mp4
num-sources=1
drop-frame-interval=0
#camera-width=640
#camera-height=480
#camera-fps-n=30
#camera-v4l2-dev-node=1
#camera-id=0
#camera-fps-d=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=1
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
sync=0
bitrate=2000000
output-file=outOBJECT_FP16.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
border-width=2
text-size=0
text-color=1;1;1;1;
#text-bg-color=0.3;0.3;0.3;1
text-bg-color=0.0;0.0;0.0;1
font=Serif
show-clock=0
#clock-x-offset=800
#clock-y-offset=820
clock-text-size=3
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=512
height=512
enable-padding=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.



[secondary-gie0]
enable=1
gpu-id=0

batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=16
operate-on-gie-id=1
config-file=/workspace/DeepStream.X11-unix/TLT_DEMO/ds_configs/primary_inference_object_detection.txt
#raw-output-filewrite=1
#infer-raw-output-dir=./meta/


[primary-gie]
enable=1

batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
interval=0
#Required by the app for SGIE, when used along with config-file property

gie-unique-id=1

config-file=primary_inference_test.txt
#raw-output-filewrite=1
#infer-raw-output-dir=./meta/




[ds-example]
enable=1
processing-width=512
processing-height=512
full-frame=1
#batch-size for batch supported optimized plugin
batch-size=1
unique-id=15
gpu-id=0


[tests]
file-loop=0








[tracker]
enable=0
tracker-width=512
tracker-height=512
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

Config txt for SGIE:

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#net-scale-factor=0.004

#model-engine-file=ResNet50GheadLayer50.etlt_b1_gpu0_fp16.engine
model-engine-file=model_b1_gpu0_fp16.engine
#custom-network-config=/workspace/TLT_DEMO/ds_configs/yolov3.cfg
#model-file=/workspace/Downloads/TLT_DEMO/ds_configs/yolov3.weights
engine-create-func-name=NvDsInferYoloCudaEngineGet
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV3
#labelfile-path=labels.txt
#int8-calib-file=calibration.bin
batch-size=1
#input-dims=3;512;512;0
process-mode=1
gie-mode=1
operate-on-gie-id=1

model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=1
interval=0
network-type=0
gie-unique-id=16
unique-id=16

#output-tensor-meta=0
#infer-raw-output-dir=./meta/
#output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
is-classifier=0
#uff-file=hao28-600000-256x384.uff
#tlt-encoded-model=ResNet50GheadLayer50_1000_epoch.etlt

#uff-input-dims=3;512;512;0
#uff-input-blob-name=input_1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=nvdsinfer_customparser/libnvds_infercustomparser.so
maintain-aspect-ratio=0



[class-attrs-all]
threshold=0.1
group-threshold=1

## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
minBoxes=0
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-0]
#threshold=0.3
#eps=0.2
#group-threshold=1
#roi-top-offset=0
#roi-bottom-offset=0
#detected-min-w=0
#detected-min-h=0
#detected-max-w=0
#detected-max-h=0
Config txt for PGIE:



[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#net-scale-factor=0.004
model-engine-file=Test_b1_gpu0_fp16.plan



#labelfile-path=labels.txt
#int8-calib-file=calibration.bin
batch-size=1
#input-dims=3;512;512;0
process-mode=1
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
#num-detected-classes=1
interval=0
network-type=100
gie-unique-id=1
unique-id=1

output-tensor-meta=1

#output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
is-classifier=0
#uff-file=hao28-600000-256x384.uff
#tlt-encoded-model=ResNet50_1000_INT8.etlt

#uff-input-dims=3;512;512;0
#uff-input-blob-name=input_1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=nvdsinfer_customparser/libnvds_infercustomparser.so
maintain-aspect-ratio=0





[class-attrs-all]
threshold=0.1
group-threshold=1

## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
minBoxes=0
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-0]
#threshold=0.3
#eps=0.2
#group-threshold=1
#roi-top-offset=0
#roi-bottom-offset=0
#detected-min-w=0
#detected-min-h=0
#detected-max-w=0
#detected-max-h=0

@abrar.shahriar

Please add file path information to each of your posts, like: /path/to/your/dir/configure_file_name.txt

All txt files are inside

/workspace/DeepStream.X11-unix/TLT_DEMO/ds_config/

My PGIE is not a detector. It is a custom model, so network-type is 100.

My problem would be solved if I made YOLO the PGIE and my custom model a full-frame SGIE. But I do not know how to get the tensor meta of the SGIE in the dsexample GStreamer plugin (currently I use meta->output_layers_info[0].buffer).
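One common way to pick up a specific GIE's tensor output is to walk the frame's user meta list and filter on the producer's unique id, as in the deepstream-infer-tensor-meta-test sample. The sketch below assumes the DeepStream SDK headers and a frame-meta context inside your plugin; `SGIE_UNIQUE_ID` is a placeholder for your SGIE's gie-unique-id, and the tensor meta is only attached when that GIE has output-tensor-meta=1 (or network-type=100) set:

```c
#include "gstnvdsmeta.h"
#include "nvdsinfer.h"

#define SGIE_UNIQUE_ID 16  /* placeholder: the gie-unique-id of your SGIE */

/* Walk the user meta attached to a frame and pick out the tensor
 * output produced by the SGIE. */
static void
find_sgie_tensor_meta (NvDsFrameMeta *frame_meta)
{
  for (NvDsMetaList *l = frame_meta->frame_user_meta_list; l != NULL;
       l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;

    if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
      continue;

    NvDsInferTensorMeta *tensor_meta =
        (NvDsInferTensorMeta *) user_meta->user_meta_data;

    /* Filter on the GIE that produced this tensor. */
    if (tensor_meta->unique_id != SGIE_UNIQUE_ID)
      continue;

    for (unsigned int i = 0; i < tensor_meta->num_output_layers; i++) {
      NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
      /* The sample app assigns the host buffer before parsing. */
      layer->buffer = tensor_meta->out_buf_ptrs_host[i];
      /* ... parse layer->buffer here ... */
    }
  }
}
```

For an object-mode (non-full-frame) SGIE, the same meta type appears in obj_user_meta_list of each NvDsObjectMeta instead of frame_user_meta_list.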

@abrar.shahriar

Sorry for the late response.
I think this thread can be closed.
Please open a new topic if you still have problems.
Please provide more details of your system information and your configuration each time you open a new topic.
