How to deploy a detector and a classifier that both process every full frame

Hi,
I am trying to deploy two models with the deepstream-app program. One is my primary detector, which has high sensitivity, and I want it to process every video frame. The other is a classifier meant to reject invalid input frames (conditions outside the primary detector's normal working range) in order to reduce false positives. This classifier also operates on every full input frame rather than on the bboxes from the detector.

I see two options. If I set the detector as the pgie element and the classifier as the sgie element, how should I obtain the classifier result from the NvDsFrameMeta struct when there is no NvDsObjectMeta list? If instead I set the classifier as the pgie element and the detector as the sgie element, it seems that only the classifier runs and the detector does nothing, judging by the very high FPS.

To help track down the problem, my config files are shown below:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=0
type=3
#uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_1080p_h264.mp4
uri=file:///home/js/disk_data/1.avi
#if type=3, num-sources indicates the number of URIs.
num-sources=1
#gpu-id=0
#cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=1
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-v4l2-dev-node=0
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=3
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=3000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
#config-file=config_infer_primary_onnx.txt
config-file=config_infer_secondary.txt

[tests]
file-loop=1

[secondary-gie0]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=4
operate-on-gie-id=1
#operate-on-class-ids=0;1;2;
config-file=config_infer_primary_onnx.txt
#config-file=config_infer_secondary.txt

################################################################################

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=/home/js/endoscopy_AI_0630/EndosAssist/opt/nvidia/deepstream/deepstream-5.1/sources/yolov4_c/yolov4-jctest.cfg
#model-file=/home/js/endoscopy_AI_0630/EndosAssist/opt/nvidia/deepstream/deepstream-5.1/sources/yolov4_c/yolov4-jctest_best01.weights
#model-engine-file=yolov4_fp16_c_608.engine
model-engine-file=1105_chang_onnx.engine
#int8-calib-file=calib.table
labelfile-path=class_c.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=nvdsinfer_custom_impl_Yolo1/libnvdsinfer_custom_impl_Yolo_c.so
#engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0.25

################################################################################

[property]
gpu-id=0
net-scale-factor=1
#onnx-file=mobilenetv3.onnx
model-engine-file=MobilenetV3_256.engine
#int8-calib-file=../../models/Secondary_CarMake/cal_trt.bin
#mean-file=../../models/Secondary_CarMake/mean.ppm
labelfile-path=labels.txt
force-implicit-batch-dim=1
batch-size=1
model-color-format=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
process-mode=1
is-classifier=1
output-blob-names=softmax_1.tmp_0
classifier-async-mode=0
classifier-threshold=0.51
operate-on-gie-id=1
scaling-filter=1
scaling-compute-hw=1

maintain-aspect-ratio=0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0

Can you try setting both nvinfer elements as PGIE?

Well, I checked the source code of deepstream-app and found that the process-mode attribute for an sgie is hard-coded to process-mode=2. After changing this value and trying option 1 again, everything works well. Thanks a lot for your help.
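For anyone hitting the same issue, a sketch of the key gie settings for running both models on the full frame is below. File names are placeholders, and the caveat from this thread applies: deepstream-app normally forces process-mode=2 (operate on objects) onto secondary gies, so the sgie's process-mode=1 only takes effect after removing that override in the app source.

# detector config (pgie) -- full frame is the default for a primary gie
[property]
gie-unique-id=1
process-mode=1
network-type=0

# classifier config (sgie) -- full frame as well; deepstream-app overrides
# this to process-mode=2 unless the app source is patched
[property]
gie-unique-id=4
operate-on-gie-id=1
process-mode=1
network-type=1

With this arrangement the classifier result should be retrievable from the metadata attached by gie-unique-id=4, while the detector's bboxes come from gie-unique-id=1 as usual.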