DeepStream: problem running multiple model engines at the same time

• Hardware Platform (GPU): RTX 3060
• DeepStream Version: 6.3
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 12.0
• Issue Type: questions

I have used back-to-back detectors, but the second detector only runs on the objects detected by the first detector.
How can both detectors run on the full frame?
I have set process-mode=0, but it does not work.

[primary-gie]
enable=1
gpu-id=0
batch-size=4
#Required by the app for OSD, not a plugin property
#bbox-border-color0=1;0;0;1
#bbox-border-color1=0;1;1;1
#bbox-border-color2=0;0;1;1
#bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=/deepstream/config/config_infer_primary_yoloV7.txt

[secondary-gie0]
enable=1
gpu-id=0
batch-size=4
gie-unique-id=2
operate-on-gie-id=1
#operate-on-class-ids=0;
config-file=/deepstream/config/secondary_gie_config.txt

secondary_gie_config.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
model-engine-file=/deepstream/config/best.engine
#int8-calib-file=calib.table
labelfile-path=/deepstream/config/labels.txt
batch-size=4
network-mode=2
num-detected-classes=12
interval=0
gie-unique-id=1
process-mode=0
network-type=0
cluster-mode=2
maintain-aspect-ratio=1

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

If the detector runs on full frame, it is primary GIE. No matter how many detectors in your pipeline, if they infer on full frames, they are all primary GIEs.
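
For reference, in an nvinfer configuration file the process-mode key selects between full-frame inference (1, primary) and inference on upstream detections (2, secondary). A sketch of how the second detector's config could look if it is meant to run on full frames, assuming the same model files as above (paths are placeholders from the original post, and the exact keys should be checked against the nvinfer documentation for your DeepStream version):

```ini
# Sketch: second detector configured as a full-frame (primary-style) detector.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=/deepstream/config/best.engine
labelfile-path=/deepstream/config/labels.txt
batch-size=4
network-mode=2
network-type=0
num-detected-classes=12
gie-unique-id=2          ; must differ from the first detector's gie-unique-id
process-mode=1           ; 1 = primary / full-frame inference (not 0)
cluster-mode=2
maintain-aspect-ratio=1
```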

It seems you are working with the deepstream-app sample app (DeepStream Reference Application - deepstream-app — DeepStream documentation 6.4 documentation). This sample does not support multiple PGIEs. Please write your own pipeline and configure both detectors as PGIEs.
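
As a starting point, a hand-built pipeline with two full-frame detectors chained back to back could look like the following gst-launch-1.0 sketch. This is only an illustration: the input file, stream resolution, and the second config file name are assumptions, and both nvinfer config files would need process-mode=1 and distinct gie-unique-id values:

```
gst-launch-1.0 \
  filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=/deepstream/config/config_infer_primary_yoloV7.txt unique-id=1 ! \
  nvinfer config-file-path=/deepstream/config/secondary_gie_config.txt unique-id=2 ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

Because both nvinfer elements operate in primary mode, each one attaches its own object metadata for the full frame rather than operating on the other's detections.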
