Is it possible to run two YOLO models at a time, as a combination of two primary GIEs, in the DeepStream 6.2 container?
I want to run two YOLO models without reference to the back-to-back detector sample.
What is the purpose of the two YOLO models? Are they both detectors that detect different classes of objects?
You can run multiple PGIEs in one pipeline; there is no limit on how many PGIEs one pipeline can contain. The only thing you need to consider is the GPU load.
The purpose of the two YOLO models is to detect different classes of objects; both models are trained on different classes.
What do you mean by GPU loading? How can I implement this?
If your GPU has enough capacity to run two YOLO models, there are no other software limitations.
You can simply put two PGIEs into your pipeline, such as “source → nvstreammux → PGIE0 → PGIE1 → nvmultistreamtiler → nvosd → sink”.
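As a rough sketch of that chain (a gst-launch form, assuming the DeepStream plugins are installed; the input URI, config file names, and resolutions below are placeholders, not values from this thread):

```shell
# Hypothetical two-PGIE pipeline sketch; config-file-path values are placeholders.
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/input.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=pgie0_config.txt ! \
  nvinfer config-file-path=pgie1_config.txt ! \
  nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

Each nvinfer config file would carry its own gie-unique-id so the two sets of detections can be told apart downstream.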
The only thing you need to pay attention to is assigning a different “gie-unique-id” to each PGIE in its gst-nvinfer configuration file (see Gst-nvinfer — DeepStream 6.3 Release documentation).
When you read the detection results from the output NvDsObjectMeta, you will see different “unique_component_id” values in NvDsObjectMeta (NVIDIA DeepStream SDK API Reference: _NvDsObjectMeta Struct Reference | NVIDIA Docs). The “unique_component_id” tells you which PGIE produced each object.
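To separate the results per model downstream, one option is to bucket objects by unique_component_id in your probe callback. The pyds metadata traversal is omitted here; this plain-Python sketch shows only the grouping step, with the ids 1 and 2 matching the gie-unique-id values above (the detection dicts and labels are made-up stand-ins for NvDsObjectMeta entries):

```python
from collections import defaultdict

def split_by_component(objects):
    """Group detected objects by the PGIE that produced them.

    `objects` is a list of dicts with a 'unique_component_id' key,
    standing in for the NvDsObjectMeta entries walked via pyds.
    """
    by_pgie = defaultdict(list)
    for obj in objects:
        by_pgie[obj["unique_component_id"]].append(obj)
    return dict(by_pgie)

# Example: objects from PGIE0 (gie-unique-id=1) and PGIE1 (gie-unique-id=2).
detections = [
    {"unique_component_id": 1, "label": "car"},
    {"unique_component_id": 2, "label": "helmet"},
    {"unique_component_id": 1, "label": "truck"},
]
grouped = split_by_component(detections)
print(sorted(grouped))   # → [1, 2]
print(len(grouped[1]))   # → 2
```

In a real probe, the same grouping would be done while iterating frame_meta.obj_meta_list, reading obj_meta.unique_component_id from each NvDsObjectMeta.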
I added two PGIEs (PGIE0 → PGIE1) to the pipeline, but I can see detection logs only from the model placed inside PGIE1.
The model placed inside PGIE0 does not seem to give any results.
I want detections from both models separately, each from its respective PGIE.
Can you tell me what I am missing?
Have you tested with the two models separately?
Yes, I tested both models separately and each works in terms of detection.
But with the models inside primary GIEs PGIE0 and PGIE1, only the model inside PGIE1 works; the PGIE0 model gives no results.
Can you put the nvinfer configurations of your PGIE0 and PGIE1 here?
How did you view the result?
[primary-gie] # PGIE 0
enable=1
gpu-id=0
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=rac_labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_rac.txt
[primary-gie] # PGIE 1
enable=1
gpu-id=0
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=2
nvbuf-memory-type=0
config-file=config_infer_primary_rac_1.txt
I viewed the results in the Kafka logs, by enabling the Kafka sink.
Can you view the result with direct display? Or save the video to a file with filesink?
The current deepstream-app implementation only supports a single PGIE. Please write your own pipeline, or modify the deepstream-app source code.
Direct display.
I am also observing the logs that appear in the Kafka terminal after enabling the Kafka sink.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.