Can DeepStream load multiple primary-gie sections, or use secondary-gie0 on the original source?

I want to run two detection models on the same frame (Inception v2 + a custom detection model),
but primary-gie only loads the last model.
Example

{deepstream_app_config.txt}
[primary-gie]
enable=1
gpu-id=0
model-engine-file=inception ..

[primary-gie]
enable=1
gpu-id=0
model-engine-file=custom detection ..

This shows only the custom model's results.

Then I tried using secondary-gie0 connected to primary-gie:
{deepstream_app_config.txt}

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
model-engine-file=inception ..

[secondary-gie0]
enable=1
gpu-id=0
operate-on-gie-id=1
gie-unique-id=2
model-engine-file=custom detection ..

It seems the secondary-gie0 results are based on the Inception detection results.

Please wait for DeepStream 4.0 for your case. 4.0 will be released in the middle of this month.

Hi,
Is there any sample in DeepStream 4.0? How can I achieve this?

Hi
You can get it here: https://developer.nvidia.com/deepstream-sdk
The sample is in the tar package.

Hi,
I couldn’t find any sample with two primary gies.

in gstnvinfer.cpp

#define PROCESS_MODEL_FULL_FRAME 1
#define PROCESS_MODEL_OBJECTS 2

....

    case PROP_PROCESS_MODE:
    {
      /* process-mode=1 means full-frame inference,
       * process-mode=2 means operating on objects detected by another gie */
      guint val = g_value_get_enum (value);
      nvinfer->process_full_frame = (val == PROCESS_MODEL_FULL_FRAME);
    }

You can set it in your second gie's config file:

[property]
...
process-mode=1
...
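
Concretely, a minimal sketch of what that second nvinfer config file might contain; the file name, engine, and label paths below are placeholders for illustration, not SDK defaults:

{config_infer_second_detector.txt}
[property]
gpu-id=0
# placeholder engine and label files
model-engine-file=custom_detector.engine
labelfile-path=custom_labels.txt
batch-size=1
num-detected-classes=4
gie-unique-id=2
# 1 = full-frame inference, 2 = operate on objects detected by another gie
process-mode=1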
[primary-gie]
process-mode=1
enable=1
#model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine
labelfile-path=../../models/facenet-120/class_labels.txt
batch-size=8
gpu-id=0
bbox-border-color0=1;1;1;1
bbox-border-color1=1;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=face.txt

Like this?

Hi Chris,

Do we need to set,

[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_fp32.engine
labelfile-path=label.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
nvbuf-memory-type=0
config-file=config1.txt

[secondary-gie0]
enable=1
gpu-id=0
#model-engine-file=model_b1_fp32.engine
labelfile-path=label.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=2
operate-on-gie-id=1
nvbuf-memory-type=0
config-file=config2.txt

and in
config2.txt

[property]
process-mode=1
.......

I tried this but it is not working as you said.

Can you give more clarity on this?

Do you mean your two models, “inception v2” and “custom detection”, are independent and both run inference on the full frame?

Dear Chris,

I am planning to detect cars and draw bounding boxes around them via YOLOv3-tiny. Then I will detect plates (and draw bounding boxes around them) inside the cars detected by the previous detector. Ultimately I will recognize the characters inside the detected plates. As you can imagine, I need an approach in which I cascade different YOLO models, drawing bounding boxes (for plate locations) inside other bounding boxes (for car locations). Is DeepStream flexible enough to let me do that without having to modify the C++ libraries, or can I do this only by changing the high-level config.txt files?

Do you mean your pipeline is like the one below? You have 3 models?
Can the “recognize characters” model also be deployed with TensorRT?

src → decoder → streammux → car detection (crop car) → license plate detection (crop LP) → recognize characters → …
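
For illustration, that kind of cascade maps onto deepstream-app config groups roughly as sketched below; the config file names are placeholders and this is not a tested configuration:

{deepstream_app_config.txt}
# car detector, full frame
[primary-gie]
enable=1
gie-unique-id=1
config-file=car_detector_config.txt

# license plate detector, runs on car crops
[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
config-file=plate_detector_config.txt

# character recognition, runs on plate crops
[secondary-gie1]
enable=1
gie-unique-id=3
operate-on-gie-id=2
config-file=char_recognizer_config.txt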

Hi Chris,

barzanhayati probably meant that. I have a similar problem. Is it possible to cascade two primary detectors via nvtracker?

Yes.
You can refer to deepstream_reference_apps/back-to-back-detectors at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub and add nvtracker after.
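
If you drive the pipeline from deepstream-app instead of the sample code, the tracker is enabled through a [tracker] group; a minimal sketch, where the library path is an assumption and depends on your DeepStream install:

[tracker]
enable=1
tracker-width=640
tracker-height=368
gpu-id=0
# assumed path; point this at the tracker library shipped with your install
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so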

Hey Chris.
So I am using two object detectors. The first detects people and the second detects faces.
I set my pipeline up like this: person detector → face detector → nvtracker.
However, the tracker seems to overwrite the Face label with Person.

I need to detect people and run a classifier on them. I also need to detect their faces and classify their age and gender. I have all the models ready but can’t seem to plug them in properly due to the aforementioned issue.
How do I go about this?

I think the pipeline is like this:

PersonDetector → person classifier
               → face detector → age classifier
                               → gender classifier
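
In deepstream-app config terms, that branching would look roughly like the sketch below; only the id wiring is shown, and the remaining keys in each group are omitted:

{deepstream_app_config.txt}
# person detector (full frame)
[primary-gie]
gie-unique-id=1

# person classifier, runs on person crops
[secondary-gie0]
gie-unique-id=2
operate-on-gie-id=1

# face detector, runs on person crops (process-mode=2 in its nvinfer config)
[secondary-gie1]
gie-unique-id=3
operate-on-gie-id=1

# age classifier, runs on face crops
[secondary-gie2]
gie-unique-id=4
operate-on-gie-id=3

# gender classifier, runs on face crops
[secondary-gie3]
gie-unique-id=5
operate-on-gie-id=3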

Can you add nvtracker right after the person detector, or not use nvtracker at all? You can also drop frames in the decoder:
https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.12.html → drop-frame-interval
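
For reference, the decoder's drop-frame-interval can also be set from the deepstream-app config, assuming your DeepStream version exposes it in the source group; a sketch with an illustrative value and a placeholder URI:

[source0]
enable=1
type=3
uri=file:///path/to/video.mp4
# decoder outputs every 5th frame; 0 means no frames are dropped
drop-frame-interval=5
gpu-id=0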